* [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments
From: Peter Todd @ 2016-05-17 13:23 UTC (permalink / raw)
To: bitcoin-dev
# Motivation
UTXO growth is a serious concern for Bitcoin's long-term decentralization. To
run a competitive mining operation, potentially the entire UTXO set must be in
RAM to achieve competitive latency; your larger, more centralized competitors
will have the UTXO set in RAM. Mining is a zero-sum game, so if they do and you
don't, the extra latency directly impacts your profit margin. Secondly,
having possession of the UTXO set is one of the minimum requirements to run a
full node; the larger the set the harder it is to run a full node.
Currently the maximum size of the UTXO set is unbounded as there is no
consensus rule that limits growth, other than the block-size limit itself; as
of writing the UTXO set is 1.3GB in the on-disk, compressed serialization,
which expands to significantly more in memory. UTXO growth is driven by a
number of factors, including the fact that there is little incentive to merge
inputs, lost coins, dust outputs that can't be economically spent, and
non-btc-value-transfer "blockchain" use-cases such as anti-replay oracles and
timestamping.
We don't have good tools to combat UTXO growth. Segregated Witness proposes to
give witness space a 75% discount, in part to make reducing the UTXO set size
by spending txouts cheaper. While this may encourage wallets to spend dust more
often, it's hard to imagine an incentive sufficiently strong to discourage
most, let alone all, UTXO-growing behavior.
For example, timestamping applications often create unspendable outputs due to
ease of implementation, and because doing so is an easy way to make sure that
the data required to reconstruct the timestamp proof won't get lost - all
Bitcoin full nodes are forced to keep a copy of it. Similarly, anti-replay
use-cases like using the UTXO set for key rotation piggyback on the uniquely
strong security and decentralization guarantee that Bitcoin provides; it's very
difficult - perhaps impossible - to provide these applications with
alternatives that are equally secure. These non-btc-value-transfer use-cases
can often afford to pay far higher fees per UTXO created than competing
btc-value-transfer use-cases; many users could afford to spend $50 to register
a new PGP key, yet would rather not spend $50 in fees to create a standard two
output transaction. Effective techniques to resist miner censorship exist, so
without resorting to whitelists, blocking non-btc-value-transfer use-cases as
"spam" is not a long-term, incentive-compatible solution.
A hard upper limit on UTXO set size could create a more level playing field in
the form of fixed minimum requirements to run a performant Bitcoin node, and
make the issue of UTXO "spam" less important. However, making any coins
unspendable, regardless of age or value, is a politically untenable economic
change.
# TXO Commitments
With a merkle tree committing to the state of all transaction outputs, both
spent and unspent, we can provide a method of compactly proving the current
state of an output. This lets us "archive" less frequently accessed parts of
the UTXO set, allowing full nodes to discard the associated data while still
providing a mechanism to spend those archived outputs by proving to those nodes
that the outputs are in fact unspent.
Specifically, TXO commitments propose using a Merkle Mountain Range¹ (MMR), a
type of deterministic, indexable, insertion-ordered merkle tree, which allows
new items to be cheaply appended to the tree with minimal storage requirements,
just log2(n) "mountain tips". Once an output is added to the TXO MMR it is
never removed; if an output is spent its status is updated in place. Both the
state of a specific item in the MMR, as well as the validity of changes to
items in the MMR, can be proven with log2(n)-sized proofs consisting of a
merkle path to the tip of the tree.
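
To make the mountain-tip bookkeeping concrete, here's a minimal Python sketch
of the append logic; the hash function, tag bytes, and peak representation are
illustrative assumptions, not the actual TXO MMR encoding.

    # Minimal MMR append sketch, assuming SHA-256 and (height, digest) peaks;
    # illustrative only, not a specification of the real TXO MMR format.
    import hashlib

    def H(*parts: bytes) -> bytes:
        return hashlib.sha256(b''.join(parts)).digest()

    class MerkleMountainRange:
        def __init__(self):
            # "Mountain tips": at most log2(n) (height, digest) pairs.
            self.peaks = []

        def append(self, leaf: bytes) -> None:
            """Append an item, merging equal-height peaks into larger mountains."""
            node = (0, H(b'\x00', leaf))
            while self.peaks and self.peaks[-1][0] == node[0]:
                height, left_digest = self.peaks.pop()
                node = (height + 1, H(b'\x01', left_digest, node[1]))
            self.peaks.append(node)

        def commitment(self) -> bytes:
            """Bag the peaks into a single digest suitable for a TXO commitment."""
            acc = b''
            for _height, digest in reversed(self.peaks):
                acc = digest if not acc else H(b'\x02', digest, acc)
            return acc

Appending n items this way never stores more than about log2(n)+1 peaks,
matching the "just log2(n) mountain tips" storage claim above.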
At an extreme, with TXO commitments we could even have no UTXO set at all,
entirely eliminating the UTXO growth problem. Transactions would simply be
accompanied by TXO commitment proofs showing that the outputs they wanted to
spend were still unspent; nodes could update the state of the TXO MMR purely
from TXO commitment proofs. However, the log2(n) bandwidth overhead per txin is
substantial, so a more realistic implementation is to have a UTXO cache for
recent transactions, with TXO commitments acting as an alternative for the
(rare) event that an old txout needs to be spent.
Proofs can be generated and added to transactions without the involvement of
the signers, even after the fact; there's no need for the proof itself to be
signed, and the proof is not part of the transaction hash. Anyone with access to
TXO MMR data can (re)generate missing proofs, so minimal, if any, changes are
required to wallet software to make use of TXO commitments.
## Delayed Commitments
TXO commitments aren't a new idea - the author proposed them years ago in
response to UTXO commitments. However, it's critical for small miners' orphan
rates that block validation be fast, and so far it has proven difficult to
create (U)TXO implementations with acceptable performance; updating and
recalculating cryptographically hashed, merkelized datasets is inherently more
work than not doing so. Fortunately, if we maintain a UTXO set for recent
outputs, TXO commitments are only needed when spending old, archived, outputs.
We can take advantage of this by delaying the commitment, allowing it to be
calculated well in advance of it actually being used, thus changing a
latency-critical task into a much easier average throughput problem.
Concretely, each block B_i commits to the TXO set state as of block B_{i-n} -
in other words, what the TXO commitment would have been n blocks ago, if not
for the n-block delay. Since that commitment only depends on the contents of
the blockchain up until block B_{i-n}, the contents of any later block are
irrelevant to the calculation.
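
As a toy illustration (the function and parameter names are hypothetical), the
rule is simply an index offset into already-final history:

    # Illustrative only: the TXO commitment included in block i is the TXO MMR
    # root as it stood n blocks earlier, so it can be computed in the background.
    def delayed_commitment(txo_mmr_roots, i, n):
        assert i >= n, "no TXO commitment until at least n blocks exist"
        return txo_mmr_roots[i - n]   # blocks after i - n cannot change this value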
## Implementation
Our proposed high-performance/low-latency delayed commitment full-node
implementation needs to store the following data:
1) UTXO set
Low-latency K:V map of txouts definitely known to be unspent. Similar to the
existing UTXO implementation, but with the key difference that old, unspent
outputs may be pruned from the UTXO set.
2) STXO set
Low-latency set of transaction outputs known to have been spent by
transactions after the most recent TXO commitment, but created prior to the
TXO commitment.
3) TXO journal
FIFO of outputs that need to be marked as spent in the TXO MMR. Appends
must be low-latency; removals can be high-latency.
4) TXO MMR list
Prunable, ordered list of TXO MMR's, mainly the highest pending commitment,
backed by a reference counted, cryptographically hashed object store
indexed by digest (similar to how git repos work). High-latency ok. We'll
cover this in more detail later; a rough data-layout sketch follows below.
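
    # Rough sketch of the four stores; container choices and field names are
    # illustrative assumptions, not a specification.
    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class DelayedCommitmentState:
        # 1) UTXO set: low-latency outpoint -> txout map; old entries prunable.
        utxo: dict = field(default_factory=dict)
        # 2) STXO set: outputs spent after, but created before, the most recent
        #    TXO commitment.
        stxo: set = field(default_factory=set)
        # 3) TXO journal: FIFO of spends awaiting application to the TXO MMR.
        journal: deque = field(default_factory=deque)
        # 4) TXO MMR list: digest-indexed object store plus the ordered list of
        #    MMR states, reference counted in a real implementation.
        mmr_objects: dict = field(default_factory=dict)
        mmr_states: list = field(default_factory=list)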
### Fast-Path: Verifying a Txout Spend In a Block
When a transaction output is spent by a transaction in a block we have two
cases:
1) Recently created output
Output created after the most recent TXO commitment, so it should be in the
UTXO set; the transaction spending it does not need a TXO commitment proof.
Remove the output from the UTXO set and append it to the TXO journal.
2) Archived output
Output created prior to the most recent TXO commitment, so there's no
guarantee it's in the UTXO set; the transaction will have a TXO commitment
proof for the most recent TXO commitment showing that it was unspent.
Check that the output isn't already in the STXO set (double-spent), and if
not add it. Append the output and TXO commitment proof to the TXO journal.
In both cases recording an output as spent requires no more than two key:value
updates, and one journal append. The existing UTXO set requires one key:value
update per spend, so we can expect new block validation latency to be within 2x
of the status quo even in the worst case of 100% archived output spends.
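
A sketch of those two cases in Python follows; the store layout matches the
hypothetical sketch above, and verify_txo_proof is an assumed helper, not part
of the proposal.

    def verify_txo_proof(proof, outpoint) -> bool:
        """Stub: a real node checks the merkle path against the most recent
        TXO commitment; elided here."""
        raise NotImplementedError

    # Fast-path sketch: at most two key:value updates plus one journal append.
    def apply_spend(state, outpoint, txo_proof=None) -> bool:
        if outpoint in state.utxo:
            # Case 1: recently created output, no TXO commitment proof needed.
            del state.utxo[outpoint]                  # key:value update
            state.journal.append((outpoint, None))    # journal append
            return True
        # Case 2: archived output, must prove it was unspent as of the commitment.
        if txo_proof is None or not verify_txo_proof(txo_proof, outpoint):
            return False
        if outpoint in state.stxo:                    # already spent: reject
            return False
        state.stxo.add(outpoint)                      # key:value update
        state.journal.append((outpoint, txo_proof))   # journal append
        return True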
### Slow-Path: Calculating Pending TXO Commitments
In a low-priority background task we flush the TXO journal, recording the
outputs spent by each block in the TXO MMR, and hashing MMR data to obtain the
TXO commitment digest. Additionally this background task removes STXO's that
have been recorded in TXO commitments, and prunes TXO commitment data no longer
needed.
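
Sketched in Python below; the method names on the MMR object are hypothetical
stand-ins for whatever a real implementation provides.

    # Slow-path sketch: drain the journal into the TXO MMR and compute the
    # pending commitment digest.
    def flush_txo_journal(state, mmr) -> bytes:
        while state.journal:
            outpoint, proof = state.journal.popleft()
            if proof is not None:
                mmr.unprune(proof)        # restore archived nodes from the proof
            mmr.mark_spent(outpoint)      # in-place leaf update
        pending_commitment = mmr.commitment()
        # Once this commitment becomes the most recent one on-chain, the STXO
        # entries it records, and MMR data it makes redundant, can be pruned.
        return pending_commitment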
Throughput for the TXO commitment calculation will be worse than in the
existing UTXO-only scheme. This impacts bulk verification, e.g. initial block
download. That said, TXO commitments provide other possible tradeoffs that can
mitigate the impact of slower validation throughput, such as skipping
validation of old history, as well as fraud-proof approaches.
### TXO MMR Implementation Details
Each TXO MMR state is a modification of the previous one with most information
shared, so we can space-efficiently store a large number of TXO commitment
states, where each state is a small delta of the previous state, by sharing
unchanged data between states; cycles are impossible in merkelized data
structures, so simple reference counting is sufficient for garbage collection.
Data no longer needed can be pruned by dropping it from the database, and
unpruned by adding it again. Since everything is committed to via cryptographic
hash, we're guaranteed that regardless of where we get the data, after
unpruning we'll have the right data.
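
For illustration, a tiny reference-counted, hash-addressed node store might
look like the following; serialization and hash choice are assumptions, not
the proposed on-disk format.

    # Sketch of a git-style, hash-addressed node store with reference counting.
    import hashlib

    def node_digest(node: bytes) -> bytes:
        return hashlib.sha256(node).digest()

    class NodeStore:
        def __init__(self):
            self.nodes = {}  # digest -> [node bytes, refcount]

        def put(self, node: bytes) -> bytes:
            """Store (or re-reference) a node shared between TXO MMR states."""
            d = node_digest(node)
            entry = self.nodes.setdefault(d, [node, 0])
            entry[1] += 1
            return d

        def release(self, d: bytes) -> None:
            """Drop one reference; prune the node once nothing refers to it."""
            entry = self.nodes[d]
            entry[1] -= 1
            if entry[1] <= 0:
                del self.nodes[d]

        def unprune(self, d: bytes, node: bytes) -> None:
            """Re-add pruned data from any source; the digest proves it's right."""
            assert node_digest(node) == d
            self.nodes.setdefault(d, [node, 0])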
Let's look at how the TXO MMR works in detail. Consider the following TXO MMR
with two txouts, which we'll call state #0:
0
/ \
a b
If we add another entry we get state #1:
1
/ \
0 \
/ \ \
a b c
Note how 100% of the state #0 data was reused in commitment #1. Let's
add two more entries to get state #2:
2
/ \
2 \
/ \ \
/ \ \
/ \ \
0 2 \
/ \ / \ \
a b c d e
This time part of state #1 wasn't reused - it wasn't a perfect binary
tree - but we've still got a lot of re-use.
Now suppose state #2 is committed into the blockchain by the most recent block.
Future transactions attempting to spend outputs created as of state #2 are
obliged to prove that they are unspent; essentially they're forced to provide
part of the state #2 MMR data. This lets us prune that data, discarding it,
leaving us with only the bare minimum data we need to append new txouts to the
TXO MMR, the tips of the perfect binary trees ("mountains") within the MMR:
2
/ \
2 \
\
\
\
\
\
e
Note that we're glossing over some nuance here about exactly what data needs to
be kept; depending on the details of the implementation the only data we need
for nodes "2" and "e" may be their hash digest.
Adding three more txouts results in state #3:
3
/ \
/ \
/ \
/ \
/ \
/ \
/ \
2 3
/ \
/ \
/ \
3 3
/ \ / \
e f g h
Suppose recently created txout f is spent. We have all the data required to
update the MMR, giving us state #4. It modifies two inner nodes and one leaf
node:
4
/ \
/ \
/ \
/ \
/ \
/ \
/ \
2 4
/ \
/ \
/ \
4 3
/ \ / \
e (f) g h
Spending an archived txout requires the transaction to provide the merkle path
to the most recently committed TXO state, in our case state #2. If txout b is
spent, that means the transaction must provide the following data from state #2:
2
/
2
/
/
/
0
\
b
We can add that data to our local knowledge of the TXO MMR, unpruning part of
it:
4
/ \
/ \
/ \
/ \
/ \
/ \
/ \
2 4
/ / \
/ / \
/ / \
0 4 3
\ / \ / \
b e (f) g h
Remember, we haven't _modified_ state #4 yet; we just have more data about it.
When we mark txout b as spent we get state #5:
5
/ \
/ \
/ \
/ \
/ \
/ \
/ \
5 4
/ / \
/ / \
/ / \
5 4 3
\ / \ / \
(b) e (f) g h
Secondly, by now state #3 has been committed into the chain, and transactions
that want to spend txouts created as of state #3 must provide a TXO proof
consisting of state #3 data. The leaf nodes for outputs g and h, and the inner
node above them, are part of state #3, so we prune them:
5
/ \
/ \
/ \
/ \
/ \
/ \
/ \
5 4
/ /
/ /
/ /
5 4
\ / \
(b) e (f)
Finally, let's put this all together by spending txouts a, c, and g, and
creating three new txouts i, j, and k. State #3 was the most recently committed
state, so the transactions spending a and g are providing merkle paths up to
it. This includes part of the state #2 data:
3
/ \
/ \
/ \
/ \
/ \
/ \
/ \
2 3
/ \ \
/ \ \
/ \ \
0 2 3
/ / /
a c g
After unpruning we have the following data for state #5:
5
/ \
/ \
/ \
/ \
/ \
/ \
/ \
5 4
/ \ / \
/ \ / \
/ \ / \
5 2 4 3
/ \ / / \ /
a (b) c e (f) g
That's sufficient to mark the three outputs as spent and add the three new
txouts, resulting in state #6:
6
/ \
/ \
/ \
/ \
/ \
6 \
/ \ \
/ \ \
/ \ \
/ \ \
/ \ \
/ \ \
/ \ \
6 6 \
/ \ / \ \
/ \ / \ 6
/ \ / \ / \
6 6 4 6 6 \
/ \ / / \ / / \ \
(a) (b) (c) e (f) (g) i j k
Again, state #4 related data can be pruned. In addition, depending on how the
STXO set is implemented, we may also be able to prune data related to txouts
spent after that state, including inner nodes where all txouts under them have
been spent (more on pruning spent inner nodes later).
### Consensus and Pruning
It's important to note that pruning behavior is consensus-critical: a full node
that is missing data because it pruned it too soon will fall out of consensus,
and a miner that fails to include a merkle proof that is required by consensus
is creating an invalid block. At the same time many full nodes will have
significantly more data on hand than the bare minimum so they can help wallets
make transactions spending old coins; implementations should strongly consider
separating the data that is, and isn't, strictly required for consensus.
A reasonable approach for the low-level cryptography may be to actually treat
the two cases differently, with the TXO commitments committing to what data
does and does not need to be kept on hand by the UTXO expiration rules. On the
other hand, leaving that uncommitted allows for certain types of soft-forks
where the protocol is changed to require more data than it previously did.
### Consensus Critical Storage Overheads
Only the UTXO and STXO sets need to be kept on fast random access storage.
Since STXO set entries can only be created by spending a UTXO - and are smaller
than a UTXO entry - we can guarantee that the peak size of the UTXO and STXO
sets combined will always be less than the peak size of the UTXO set alone in
the existing UTXO-only scheme (though the combined size can be temporarily
higher than what the UTXO set size alone would be when large numbers of
archived txouts are spent).
TXO journal entries and unpruned entries in the TXO MMR have log2(n) maximum
overhead per entry: a unique merkle path to a TXO commitment (by "unique" we
mean that no other entry shares data with it). On a reasonably fast system the
TXO journal will be flushed quickly, converting it into TXO MMR data; the TXO
journal will never be more than a few blocks in size.
Transactions spending non-archived txouts are not required to provide any TXO
commitment data; we must have that data on hand in the form of one TXO MMR
entry per UTXO. Once spent, however, the TXO MMR leaf node associated with that
non-archived txout can be immediately pruned - it's no longer in the UTXO set,
so any attempt to spend it will fail; the data is now immutable and we'll never
need it again. Inner nodes in the TXO MMR can also be pruned if all leaves under
them are fully spent; detecting this is easy, as the TXO MMR is a merkle-sum
tree, with each inner node committing to the sum of the unspent txouts under it.
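
A minimal sketch of such a merkle-sum inner node (hashing and serialization
details are assumptions for illustration):

    # Merkle-sum node sketch: each inner node commits to the count of unspent
    # txouts beneath it, so fully-spent subtrees are trivially detectable.
    import hashlib
    from dataclasses import dataclass

    @dataclass
    class SumNode:
        digest: bytes
        unspent: int      # unspent txouts under this node

    def parent(left: SumNode, right: SumNode) -> SumNode:
        total = left.unspent + right.unspent
        d = hashlib.sha256(left.digest + right.digest +
                           total.to_bytes(8, 'big')).digest()
        return SumNode(d, total)

    def prunable(node: SumNode) -> bool:
        return node.unspent == 0   # nothing below can ever be spent again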
When an archived txout is spent the transaction is required to provide a merkle
path to the most recent TXO commitment. As shown above, that path is sufficient
information to unprune the necessary nodes in the TXO MMR and apply the spend
immediately, reducing this case to the TXO journal size question (non-consensus
critical overhead is a different question, which we'll address in the next
section).
Taking all this into account, the only significant storage overhead of our TXO
commitments scheme when compared to the status quo is the log2(n) merkle path
overhead; as long as less than 1/log2(n) of the UTXO set consists of active,
non-archived UTXO's, we've come out ahead, even in the unrealistic case where
all available storage is equally fast. In the real world that isn't yet the
case - even SSD's are significantly slower than RAM.
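
A back-of-the-envelope check of that break-even point, with purely
illustrative numbers:

    # Illustrative break-even arithmetic; the counts below are made-up examples.
    import math

    total_txos   = 50_000_000                 # all outputs ever created (MMR leaves)
    active_utxos = 1_000_000                  # recent outputs kept with MMR paths
    path_nodes   = math.log2(total_txos)      # ~25.6 unique merkle-path nodes each

    mmr_overhead = active_utxos * path_nodes  # unpruned MMR nodes we must store
    print(mmr_overhead < total_txos)          # True while active/total < 1/log2(n)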
### Non-Consensus Critical Storage Overheads
Transactions spending archived txouts pose two challenges:
1) Obtaining up-to-date TXO commitment proofs
2) Updating those proofs as blocks are mined
The first challenge can be handled by specialized archival nodes, not unlike
how some nodes make transaction data available to wallets via bloom filters or
the Electrum protocol. There's a whole variety of options available, and the
data can be easily sharded to scale horizontally; the data is self-validating,
allowing horizontal scaling without trust.
While miners and relay nodes don't need to be concerned about the initial
commitment proof, updating that proof is another matter. If a node aggressively
prunes old versions of the TXO MMR as it calculates pending TXO commitments, it
won't have the data available to update the TXO commitment proof to be against
the next block, when that block is found; the child nodes of the TXO MMR tip
are guaranteed to have changed, yet aggressive pruning would have discarded that
data.
Relay nodes could ignore this problem if they simply accept the fact that
they'll only be able to fully relay the transaction once, when it is initially
broadcast, and won't be able to provide mempool functionality after the initial
relay. Modulo high-latency mixnets, this is probably acceptable; the author has
previously argued that relay nodes don't need a mempool² at all.
For a miner, though, not having the data necessary to update the proofs as
blocks are found means potentially losing out on transaction fees. So how much
extra data is necessary to make this a non-issue?
Since the TXO MMR is insertion ordered, spending a non-archived txout can only
invalidate the upper nodes of the archived txout's TXO MMR proof (if this isn't
clear, imagine a two-level scheme, with per-block TXO MMRs committed by a
master MMR for all blocks). The maximum number of relevant inner nodes changed
is log2(n) per block, so if there are n non-archival blocks between the most
recent TXO commitment and the pending TXO MMR tip, we have to store log2(n)*n
inner nodes - on the order of a few dozen MB even when n is a (seemingly
ridiculously high) year's worth of blocks.
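
A rough check of that "few dozen MB" figure; the 32-byte digest size and the
blocks-per-year count are assumptions for illustration:

    import math

    n_blocks    = 52_560                          # ~144 blocks/day * 365 days
    digest_size = 32                              # bytes per stored inner node
    total_bytes = n_blocks * math.log2(n_blocks) * digest_size
    print(total_bytes / 1e6)                      # ~26 MB of low-priority data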
Archived txout spends on the other hand can invalidate TXO MMR proofs at any
level - consider the case of two adjacent txouts being spent. To guarantee
success requires storing full proofs. However, they're limited by the blocksize
limit, and additionally are expected to be relatively uncommon. For example, if
1% of 1MB blocks were archival spends, our hypothetical year-long TXO commitment
delay requires only a few hundred MB of data with low-IO-performance requirements.
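
A sanity check of the "few hundred MB" figure; the 1 MB block size and 1%
archival-spend fraction come from the example above, and the block count is an
assumption:

    blocks_per_year = 52_560
    proof_bytes_per_block = 1_000_000 * 0.01             # 1% of a 1 MB block
    print(blocks_per_year * proof_bytes_per_block / 1e6) # ~526 MB over a year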
## Security Model
Of course, a TXO commitment delay of a year sounds ridiculous. Even the slowest
imaginable computer isn't going to need more than a few blocks of TXO
commitment delay to keep up ~100% of the time, and there's no reason why we
can't have the UTXO archive delay be significantly longer than the TXO
commitment delay.
However, as with UTXO commitments, TXO commitments raise issues with Bitcoin's
security model by allowing miners to profitably mine transactions without
bothering to validate prior history. At the extreme, if there were no
commitment delay at all, then at the cost of some extra network bandwidth
"full" nodes could operate and even mine blocks completely statelessly by
expecting all transactions to include "proof" that their inputs are unspent; a
TXO commitment proof for a commitment you haven't verified isn't a proof that a
transaction output is unspent, it's a proof that some miners claimed the txout
was unspent.
At one extreme, we could simply implement TXO commitments in a "virtual"
fashion, without miners actually including the TXO commitment digest in their
blocks at all. Full nodes would be forced to compute the commitment from
scratch, in the same way they are forced to compute the UTXO state, or total
work. Of course a full node operator who doesn't want to verify old history can
get a copy of the TXO state from a trusted source - no different from how you
could get a copy of the UTXO set from a trusted source.
A more pragmatic approach is to accept that people will do that anyway, and
instead assume that sufficiently old blocks are valid. But how old is
"sufficiently old"? First of all, if your full node implementation comes "from
the factory" with a reasonably up-to-date minimum accepted total-work
thresholdⁱ - in other words it won't accept a chain with less than that amount
of total work - it may be reasonable to assume any Sybil attacker with
sufficient hashing power to make a forked chain meeting that threshold with,
say, six months' worth of blocks has enough hashing power to threaten the main
chain as well.
That leaves public attempts to falsify TXO commitments, done out in the open by
the majority of hashing power. In this circumstance the "assumed valid"
threshold determines how long the attack would have to go on before full nodes
start accepting the invalid chain, or at least, newly installed/recently reset
full nodes. The minimum age that we can "assume valid" is a tradeoff between
political/social/technical concerns; we probably want at least a few weeks to
guarantee the defenders a chance to organise themselves.
With this in mind, a longer-than-technically-necessary TXO commitment delayʲ
may help ensure that full node software actually validates some minimum number
of blocks out-of-the-box, without taking shortcuts. However this can be
achieved in a wide variety of ways, such as the author's prev-block-proof
proposal³, fraud proofs, or even a PoW with an inner loop dependent on
blockchain data. Like UTXO commitments, TXO commitments are also potentially
very useful in reducing the need for SPV wallet software to trust third parties
providing them with transaction data.
i) Checkpoints that reject any chain without a specific block are a more
common, if uglier, way of achieving this protection.
j) A good homework problem is to figure out how the TXO commitment could be
designed such that the delay could be reduced in a soft-fork.
## Further Work
While we've shown that TXO commitments certainly could be implemented without
increasing peak IO bandwidth/block validation latency significantly with the
delayed commitment approach, we're far from being certain that they should be
implemented this way (or at all).
1) Can a TXO commitment scheme be optimized sufficiently to be used directly
without a commitment delay? Obviously it'd be preferable to avoid all the above
complexity entirely.
2) Is it possible to use a metric other than age, e.g. priority? While this
complicates the pruning logic, it could use the UTXO set space more
efficiently, especially if your goal is to prioritise bitcoin value-transfer
over other uses (though if "normal" wallets almost never need to use TXO
commitment proofs to spend outputs, the infrastructure to actually do this may
rot).
3) Should UTXO archiving be based on a fixed size UTXO set, rather than an
age/priority/etc. threshold?
4) By fixing the problem (or possibly just "fixing" the problem) are we
encouraging/legitimising blockchain use-cases other than BTC value transfer?
Should we?
5) Instead of TXO commitment proofs counting towards the blocksize limit, can
we use a different miner fairness/decentralization metric/incentive? For
instance it might be reasonable for the TXO commitment proof size to be
discounted, or ignored entirely, if a proof-of-propagation scheme (e.g.
thinblocks) is used to ensure all miners have received the proof in advance.
6) How does this interact with fraud proofs? Obviously furthering dependency on
non-cryptographically-committed STXO/UTXO databases is incompatible with the
modularized validation approach to implementing fraud proofs.
# References
1) "Merkle Mountain Ranges",
Peter Todd, OpenTimestamps, Mar 18 2013,
https://github.com/opentimestamps/opentimestamps-server/blob/master/doc/merkle-mountain-range.md
2) "Do we really need a mempool? (for relay nodes)",
Peter Todd, bitcoin-dev mailing list, Jul 18th 2015,
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009479.html
3) "Segregated witnesses and validationless mining",
Peter Todd, bitcoin-dev mailing list, Dec 23rd 2015,
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012103.html
--
https://petertodd.org 'peter'[:-1]@petertodd.org
* Re: [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments
From: Jameson Lopp @ 2016-05-17 14:03 UTC (permalink / raw)
To: Peter Todd, Bitcoin Protocol Discussion
Great post, Peter.
> 4) By fixing the problem (or possibly just "fixing" the problem) are
> we encouraging/legitimising blockchain use-cases other than BTC value
> transfer? Should we?
I don't think it would encourage non-value-transfer usage more
because, as you noted, many such use cases are valuable enough that
people are willing to pay much higher transaction fees in order to
have their data timestamped. I think it's more an issue of the block
space / transaction fee market since the cost of making a transaction
is directly borne by users, as opposed to the cost of the UTXO set
which may not be borne by them if they don't run a full node.
I'm of the opinion that if the world decides that Bitcoin is more
valuable as a trustworthy generalized timestamping mechanism than as a
value transfer system, protocol developers shouldn't try to steer the
ship against the wind. As more people and use cases enter the
ecosystem, the most valuable ones ought to survive - I hope that this
market will be fostered by the developers.
- Jameson
* Re: [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments
From: Eric Lombrozo @ 2016-05-17 14:25 UTC (permalink / raw)
To: Peter Todd, Bitcoin Protocol Discussion
Nice!
We’ve been talking about doing this forever and it’s so desperately needed.
> On May 17, 2016, at 3:23 PM, Peter Todd via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> # Motivation
>
> UTXO growth is a serious concern for Bitcoin's long-term decentralization. To
> run a competitive mining operation potentially the entire UTXO set must be in
> RAM to achieve competitive latency; your larger, more centralized, competitors
> will have the UTXO set in RAM. Mining is a zero-sum game, so the extra latency
> of not doing so if they do directly impacts your profit margin. Secondly,
> having possession of the UTXO set is one of the minimum requirements to run a
> full node; the larger the set the harder it is to run a full node.
>
> Currently the maximum size of the UTXO set is unbounded as there is no
> consensus rule that limits growth, other than the block-size limit itself; as
> of writing the UTXO set is 1.3GB in the on-disk, compressed serialization,
> which expands to significantly more in memory. UTXO growth is driven by a
> number of factors, including the fact that there is little incentive to merge
> inputs, lost coins, dust outputs that can't be economically spent, and
> non-btc-value-transfer "blockchain" use-cases such as anti-replay oracles and
> timestamping.
>
> We don't have good tools to combat UTXO growth. Segregated Witness proposes to
> give witness space a 75% discount, in part of make reducing the UTXO set size
> by spending txouts cheaper. While this may change wallets to more often spend
> dust, it's hard to imagine an incentive sufficiently strong to discourage most,
> let alone all, UTXO growing behavior.
>
> For example, timestamping applications often create unspendable outputs due to
> ease of implementation, and because doing so is an easy way to make sure that
> the data required to reconstruct the timestamp proof won't get lost - all
> Bitcoin full nodes are forced to keep a copy of it. Similarly anti-replay
> use-cases like using the UTXO set for key rotation piggyback on the uniquely
> strong security and decentralization guarantee that Bitcoin provides; it's very
> difficult - perhaps impossible - to provide these applications with
> alternatives that are equally secure. These non-btc-value-transfer use-cases
> can often afford to pay far higher fees per UTXO created than competing
> btc-value-transfer use-cases; many users could afford to spend $50 to register
> a new PGP key, yet would rather not spend $50 in fees to create a standard two
> output transaction. Effective techniques to resist miner censorship exist, so
> without resorting to whitelists blocking non-btc-value-transfer use-cases as
> "spam" is not a long-term, incentive compatible, solution.
>
> A hard upper limit on UTXO set size could create a more level playing field in
> the form of fixed minimum requirements to run a performant Bitcoin node, and
> make the issue of UTXO "spam" less important. However, making any coins
> unspendable, regardless of age or value, is a politically untenable economic
> change.
>
>
> # TXO Commitments
>
> A merkle tree committing to the state of all transaction outputs, both spent
> and unspent, we can provide a method of compactly proving the current state of
> an output. This lets us "archive" less frequently accessed parts of the UTXO
> set, allowing full nodes to discard the associated data, still providing a
> mechanism to spend those archived outputs by proving to those nodes that the
> outputs are in fact unspent.
>
> Specifically, TXO commitments propose a Merkle Mountain Range¹ (MMR), a
> type of deterministic, indexable, insertion ordered merkle tree, which allows
> new items to be cheaply appended to the tree with minimal storage requirements,
> just log2(n) "mountain tips". Once an output is added to the TXO MMR it is
> never removed; if an output is spent its status is updated in place. Both the
> state of a specific item in the MMR, as well as the validity of changes to items
> in the MMR, can be proven with log2(n) sized proofs consisting of a merkle path
> to the tip of the tree.
>
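For illustration only, here is a minimal Python sketch of such an append-only
structure that keeps nothing but the log2(n) "mountain tips"; all names are
hypothetical and the hashing is deliberately simplistic, not the proposal's
actual serialization:

    import hashlib

    def H(*parts):
        # Toy hash helper; a real implementation needs a proper serialization.
        return hashlib.sha256(b'\x00'.join(parts)).digest()

    class TipOnlyMMR:
        """Append-only MMR storing only its mountain tips."""
        def __init__(self):
            self.tips = []  # (height, digest) pairs, largest mountain first

        def append(self, leaf_digest):
            new = (0, leaf_digest)
            # Merge equal-height mountains, exactly like binary addition.
            while self.tips and self.tips[-1][0] == new[0]:
                height, left = self.tips.pop()
                new = (height + 1, H(left, new[1]))
            self.tips.append(new)

        def root(self):
            # Bag the peaks into a single commitment digest, right to left.
            digest = self.tips[-1][1]
            for _, peak in reversed(self.tips[:-1]):
                digest = H(peak, digest)
            return digest
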
> At an extreme, with TXO commitments we could even have no UTXO set at all,
> entirely eliminating the UTXO growth problem. Transactions would simply be
> accompanied by TXO commitment proofs showing that the outputs they wanted to
> spend were still unspent; nodes could update the state of the TXO MMR purely
> from TXO commitment proofs. However, the log2(n) bandwidth overhead per txin is
> substantial, so a more realistic implementation is to have a UTXO cache for
> recent transactions, with TXO commitments acting as an alternative for the (rare)
> event that an old txout needs to be spent.
>
> Proofs can be generated and added to transactions without the involvement of
> the signers, even after the fact; there's no need for the proof itself to
> be signed, and the proof is not part of the transaction hash. Anyone with access to
> TXO MMR data can (re)generate missing proofs, so minimal, if any, changes are
> required to wallet software to make use of TXO commitments.
>
>
> ## Delayed Commitments
>
> TXO commitments aren't a new idea - the author proposed them years ago in
> response to UTXO commitments. However it's critical for small miners' orphan
> rates that block validation be fast, and so far it has proven difficult to
> create (U)TXO implementations with acceptable performance; updating and
> recalculating cryptographically hashed, merkelized datasets is inherently more
> work than not doing so. Fortunately if we maintain a UTXO set for recent
> outputs, TXO commitments are only needed when spending old, archived, outputs.
> We can take advantage of this by delaying the commitment, allowing it to be
> calculated well in advance of it actually being used, thus changing a
> latency-critical task into a much easier average throughput problem.
>
> Concretely each block B_i commits to the TXO set state as of block B_{i-n}, in
> other words what the TXO commitment would have been n blocks ago, if not for
> the n block delay. Since that commitment only depends on the contents of the
> blockchain up until block B_{i-n}, the contents of any block after are
> irrelevant to the calculation.
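
For example, with n = 100 the block at height 1,100 would carry the commitment
digest for the TXO MMR as it stood after block 1,000; the contents of blocks
1,001 through 1,100 have no effect on that digest.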
>
>
> ## Implementation
>
> Our proposed high-performance/low-latency delayed commitment full-node
> implementation needs to store the following data:
>
> 1) UTXO set
>
> Low-latency K:V map of txouts definitely known to be unspent. Similar to
> existing UTXO implementation, but with the key difference that old,
> unspent, outputs may be pruned from the UTXO set.
>
>
> 2) STXO set
>
> Low-latency set of transaction outputs known to have been spent by
> transactions after the most recent TXO commitment, but created prior to the
> TXO commitment.
>
>
> 3) TXO journal
>
> FIFO of outputs that need to be marked as spent in the TXO MMR. Appends
> must be low-latency; removals can be high-latency.
>
>
> 4) TXO MMR list
>
> Prunable, ordered list of TXO MMR's, mainly the highest pending commitment,
> backed by a reference counted, cryptographically hashed object store
> indexed by digest (similar to how git repos work). High-latency ok. We'll
> cover this in more detail later.
>
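As a rough structural sketch only (field names are hypothetical, not from the
proposal), the four stores above might be laid out along these lines in Python:

    from collections import deque

    class DelayedCommitmentState:
        """Hypothetical skeleton of the four stores listed above."""
        def __init__(self):
            self.utxo_set = {}          # outpoint -> txout; fast K:V, prunable
            self.stxo_set = set()       # outpoints spent since the last TXO commitment
            self.txo_journal = deque()  # FIFO of (outpoint, proof-or-None) awaiting the MMR
            self.txo_mmr_list = []      # ordered, prunable TXO MMR states (hash-addressed)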
>
> ### Fast-Path: Verifying a Txout Spend In a Block
>
> When a transaction output is spent by a transaction in a block we have two
> cases:
>
> 1) Recently created output
>
> Output created after the most recent TXO commitment, so it should be in the
> UTXO set; the transaction spending it does not need a TXO commitment proof.
> Remove the output from the UTXO set and append it to the TXO journal.
>
> 2) Archived output
>
> Output created prior to the most recent TXO commitment, so there's no
> guarantee it's in the UTXO set; transaction will have a TXO commitment
> proof for the most recent TXO commitment showing that it was unspent.
> Check that the output isn't already in the STXO set (double-spent), and if
> not add it. Append the output and TXO commitment proof to the TXO journal.
>
> In both cases recording an output as spent requires no more than two key:value
> updates, and one journal append. The existing UTXO set requires one key:value
> update per spend, so we can expect new block validation latency to be within 2x
> of the status quo even in the worst case of 100% archived output spends.
>
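A minimal sketch of that two-case fast path, reusing the hypothetical
DelayedCommitmentState skeleton above; proof verification and error handling
are elided:

    def spend_output(state, outpoint, txo_proof=None):
        """Record outpoint as spent: at most two K:V updates plus one journal append."""
        if outpoint in state.utxo_set:
            # Case 1: recently created output, still in the UTXO set.
            del state.utxo_set[outpoint]
            state.txo_journal.append((outpoint, None))
        else:
            # Case 2: archived output; the spending transaction must carry a
            # TXO commitment proof, and must not already be in the STXO set.
            if txo_proof is None or outpoint in state.stxo_set:
                raise ValueError("missing proof or double-spend")
            state.stxo_set.add(outpoint)
            state.txo_journal.append((outpoint, txo_proof))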
>
> ### Slow-Path: Calculating Pending TXO Commitments
>
> In a low-priority background task we flush the TXO journal, recording the
> outputs spent by each block in the TXO MMR, and hashing MMR data to obtain the
> TXO commitment digest. Additionally this background task removes STXO's that
> have been recorded in TXO commitments, and prunes TXO commitment data no longer
> needed.
>
> Throughput for the TXO commitment calculation will be worse than that of the
> existing UTXO-only scheme. This impacts bulk verification, e.g. initial block
> download. That said, TXO commitments provide other possible tradeoffs that can
> mitigate the impact of slower validation throughput, such as skipping
> validation of old history, as well as fraud proof approaches.
>
>
> ### TXO MMR Implementation Details
>
> Each TXO MMR state is a modification of the previous one with most information
> shared, so we can space-efficiently store a large number of TXO commitment
> states, where each state is a small delta of the previous state, by sharing
> unchanged data between each state; cycles are impossible in merkelized data
> structures, so simple reference counting is sufficient for garbage collection.
> Data no longer needed can be pruned by dropping it from the database, and
> unpruned by adding it again. Since everything is committed to via cryptographic
> hash, we're guaranteed that regardless of where we get the data, after
> unpruning we'll have the right data.
>
> Let's look at how the TXO MMR works in detail. Consider the following TXO MMR
> with two txouts, which we'll call state #0:
>
> 0
> / \
> a b
>
> If we add another entry we get state #1:
>
> 1
> / \
> 0 \
> / \ \
> a b c
>
> Note how 100% of the state #0 data was reused in commitment #1. Let's
> add two more entries to get state #2:
>
> 2
> / \
> 2 \
> / \ \
> / \ \
> / \ \
> 0 2 \
> / \ / \ \
> a b c d e
>
> This time part of state #1 wasn't reused - it wasn't a perfect binary
> tree - but we've still got a lot of re-use.
>
> Now suppose state #2 is committed into the blockchain by the most recent block.
> Future transactions attempting to spend outputs created as of state #2 are
> obliged to prove that they are unspent; essentially they're forced to provide
> part of the state #2 MMR data. This lets us prune that data, discarding it,
> leaving us with only the bare minimum data we need to append new txouts to the
> TXO MMR, the tips of the perfect binary trees ("mountains") within the MMR:
>
> 2
> / \
> 2 \
> \
> \
> \
> \
> \
> e
>
> Note that we're glossing over some nuance here about exactly what data needs to
> be kept; depending on the details of the implementation the only data we need
> for nodes "2" and "e" may be their hash digest.
>
> Adding three more txouts results in state #3:
>
> 3
> / \
> / \
> / \
> / \
> / \
> / \
> / \
> 2 3
> / \
> / \
> / \
> 3 3
> / \ / \
> e f g h
>
> Suppose recently created txout f is spent. We have all the data required to
> update the MMR, giving us state #4. It modifies two inner nodes and one leaf
> node:
>
> 4
> / \
> / \
> / \
> / \
> / \
> / \
> / \
> 2 4
> / \
> / \
> / \
> 4 3
> / \ / \
> e (f) g h
>
> Spending an archived txout requires the transaction to provide the merkle
> path to the most recently committed TXO, in our case state #2. If txout b is
> spent that means the transaction must provide the following data from state #2:
>
> 2
> /
> 2
> /
> /
> /
> 0
> \
> b
>
> We can add that data to our local knowledge of the TXO MMR, unpruning part of
> it:
>
> 4
> / \
> / \
> / \
> / \
> / \
> / \
> / \
> 2 4
> / / \
> / / \
> / / \
> 0 4 3
> \ / \ / \
> b e (f) g h
>
> Remember, we haven't _modified_ state #4 yet; we just have more data about it.
> When we mark txout b as spent we get state #5:
>
> 5
> / \
> / \
> / \
> / \
> / \
> / \
> / \
> 5 4
> / / \
> / / \
> / / \
> 5 4 3
> \ / \ / \
> (b) e (f) g h
>
> Secondly by now state #3 has been committed into the chain, and transactions
> that want to spend txouts created as of state #3 must provide a TXO proof
> consisting of state #3 data. The leaf nodes for outputs g and h, and the inner
> node above them, are part of state #3, so we prune them:
>
> 5
> / \
> / \
> / \
> / \
> / \
> / \
> / \
> 5 4
> / /
> / /
> / /
> 5 4
> \ / \
> (b) e (f)
>
> Finally, let's put this all together by spending txouts a, c, and g, and
> creating three new txouts i, j, and k. State #3 was the most recently committed
> state, so the transactions spending a and g are providing merkle paths up to
> it. This includes part of the state #2 data:
>
> 3
> / \
> / \
> / \
> / \
> / \
> / \
> / \
> 2 3
> / \ \
> / \ \
> / \ \
> 0 2 3
> / / /
> a c g
>
> After unpruning we have the following data for state #5:
>
> 5
> / \
> / \
> / \
> / \
> / \
> / \
> / \
> 5 4
> / \ / \
> / \ / \
> / \ / \
> 5 2 4 3
> / \ / / \ /
> a (b) c e (f) g
>
> That's sufficient to mark the three outputs as spent and add the three new
> txouts, resulting in state #6:
>
> 6
> / \
> / \
> / \
> / \
> / \
> 6 \
> / \ \
> / \ \
> / \ \
> / \ \
> / \ \
> / \ \
> / \ \
> 6 6 \
> / \ / \ \
> / \ / \ 6
> / \ / \ / \
> 6 6 4 6 6 \
> / \ / / \ / / \ \
> (a) (b) (c) e (f) (g) i j k
>
> Again, state #4 related data can be pruned. In addition, depending on how the
> STXO set is implemented, we may also be able to prune data related to spent
> txouts after that state, including inner nodes where all txouts under them
> have been spent (more on pruning spent inner nodes later).
>
>
> ### Consensus and Pruning
>
> It's important to note that pruning behavior is consensus critical: a full node
> that is missing data due to pruning it too soon will fall out of consensus, and
> a miner that fails to include a merkle proof that is required by the consensus
> is creating an invalid block. At the same time many full nodes will have
> significantly more data on hand than the bare minimum so they can help wallets
> make transactions spending old coins; implementations should strongly consider
> separating the data that is, and isn't, strictly required for consensus.
>
> A reasonable approach for the low-level cryptography may be to actually treat
> the two cases differently, with the TXO commitments committing to what data
> does and does not need to be kept on hand by the UTXO expiration rules. On the
> other hand, leaving that uncommitted allows for certain types of soft-forks
> where the protocol is changed to require more data than it previously did.
>
>
> ### Consensus Critical Storage Overheads
>
> Only the UTXO and STXO sets need to be kept on fast random access storage.
> Since STXO set entries can only be created by spending a UTXO - and are smaller
> than a UTXO entry - we can guarantee that the peak size of the UTXO and STXO
> sets combined will always be less than the peak size of the UTXO set alone in
> the existing UTXO-only scheme (though the combined size can be temporarily
> higher than what the UTXO set size alone would be when large numbers of
> archived txouts are spent).
>
> TXO journal entries and unpruned entries in the TXO MMR have log2(n) maximum
> overhead per entry: a unique merkle path to a TXO commitment (by "unique" we
> mean that no other entry shares data with it). On a reasonably fast system the
> TXO journal will be flushed quickly, converting it into TXO MMR data; the TXO
> journal will never be more than a few blocks in size.
>
> Transactions spending non-archived txouts are not required to provide any TXO
> commitment data; we must have that data on hand in the form of one TXO MMR
> entry per UTXO. Once spent however the TXO MMR leaf node associated with that
> non-archived txout can be immediately pruned - it's no longer in the UTXO set
> so any attempt to spend it will fail; the data is now immutable and we'll never
> need it again. Inner nodes in the TXO MMR can also be pruned if all leafs under
> them are fully spent; detecting this is easy: the TXO MMR is a merkle-sum tree,
> with each inner node committing to the sum of the unspent txouts under it.
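
For illustration, a hypothetical sketch of such a merkle-sum inner node; an
inner node whose committed unspent count has dropped to zero can be pruned
down to its bare digest:

    import hashlib

    def sum_inner_node(left_digest, left_unspent, right_digest, right_unspent):
        # Each inner node commits to its children and to the number of
        # unspent txouts beneath it.
        unspent = left_unspent + right_unspent
        digest = hashlib.sha256(
            left_digest + right_digest + unspent.to_bytes(8, 'big')).digest()
        return digest, unspent

    def fully_spent(node):
        _digest, unspent = node
        return unspent == 0  # everything under this node has been spent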
>
> When an archived txout is spent the transaction is required to provide a merkle
> path to the most recent TXO commitment. As shown above that path is sufficient
> information to unprune the necessary nodes in the TXO MMR and apply the spend
> immediately, reducing this case to the TXO journal size question (non-consensus
> critical overhead is a different question, which we'll address in the next
> section).
>
> Taking all this into account the only significant storage overhead of our TXO
> commitments scheme when compared to the status quo is the log2(n) merkle path
> overhead; as long as less than 1/log2(n) of the UTXO set consists of active,
> non-archived UTXO's we've come out ahead, even in the unrealistic case where
> all storage available is equally fast. In the real world that isn't yet the
> case - even SSD's are significantly slower than RAM.
>
>
> ### Non-Consensus Critical Storage Overheads
>
> Transactions spending archived txouts pose two challenges:
>
> 1) Obtaining up-to-date TXO commitment proofs
>
> 2) Updating those proofs as blocks are mined
>
> The first challenge can be handled by specialized archival nodes, not unlike
> how some nodes make transaction data available to wallets via bloom filters or
> the Electrum protocol. There's a whole variety of options available, and the
> data can be easily sharded to scale horizontally; the data is
> self-validating allowing horizontal scaling without trust.
>
> While miners and relay nodes don't need to be concerned about the initial
> commitment proof, updating that proof is another matter. If a node aggressively
> prunes old versions of the TXO MMR as it calculates pending TXO commitments, it
> won't have the data available to update the TXO commitment proof to be against
> the next block, when that block is found; the child nodes of the TXO MMR tip
> are guaranteed to have changed, yet aggressive pruning would have discarded that
> data.
>
> Relay nodes could ignore this problem if they simply accept the fact that
> they'll only be able to fully relay the transaction once, when it is initially
> broadcast, and won't be able to provide mempool functionality after the initial
> relay. Modulo high-latency mixnets, this is probably acceptable; the author has
> previously argued that relay nodes don't need a mempool² at all.
>
> For a miner, though, not having the data necessary to update the proofs as blocks
> are found means potentially losing out on transaction fees. So how much extra
> data is necessary to make this a non-issue?
>
> Since the TXO MMR is insertion ordered, spending a non-archived txout can only
> invalidate the upper nodes of the archived txout's TXO MMR proof (if this
> isn't clear, imagine a two-level scheme, with per-block TXO MMRs, committed
> by a master MMR for all blocks). The maximum number of relevant inner nodes
> changed is log2(n) per block, so if there are n non-archival blocks between the
> most recent TXO commitment and the pending TXO MMR tip, we have to store
> log2(n)*n inner nodes - on the order of a few dozen MB even when n is a
> (seemingly ridiculously high) year's worth of blocks.
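
For scale: a year is roughly 52,560 blocks and log2(52,560) is about 16, so
that worst case is on the order of 52,560 × 16 ≈ 840,000 inner nodes; at a few
tens of bytes per node (an assumption, not a figure from the proposal) that is
indeed a few dozen MB.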
>
> Archived txout spends on the other hand can invalidate TXO MMR proofs at any
> level - consider the case of two adjacent txouts being spent. To guarantee
> success requires storing full proofs. However, they're limited by the blocksize
> limit, and additionally are expected to be relatively uncommon. For example, if
> 1% of 1MB blocks were archival spends, our hypothetical year-long TXO commitment
> delay is only a few hundred MB of data with low-IO-performance requirements.
>
>
> ## Security Model
>
> Of course, a TXO commitment delay of a year sounds ridiculous. Even the slowest
> imaginable computer isn't going to need more than a few blocks of TXO
> commitment delay to keep up ~100% of the time, and there's no reason why we
> can't have the UTXO archive delay be significantly longer than the TXO
> commitment delay.
>
> However, as with UTXO commitments, TXO commitments raise issues with Bitcoin's
> security model by allowing miners to profitably mine transactions
> without bothering to validate prior history. At the extreme, if there was no
> commitment delay at all, then at the cost of some extra network bandwidth
> "full" nodes could operate and even mine blocks completely statelessly by
> expecting all transactions to include "proof" that their inputs are unspent; a
> TXO commitment proof for a commitment you haven't verified isn't a proof that a
> transaction output is unspent, it's a proof that some miners claimed the txout
> was unspent.
>
> At one extreme, we could simply implement TXO commitments in a "virtual"
> fashion, without miners actually including the TXO commitment digest in their
> blocks at all. Full nodes would be forced to compute the commitment from
> scratch, in the same way they are forced to compute the UTXO state, or total
> work. Of course a full node operator who doesn't want to verify old history can
> get a copy of the TXO state from a trusted source - no different from how you
> could get a copy of the UTXO set from a trusted source.
>
> A more pragmatic approach is to accept that people will do that anyway, and
> instead assume that sufficiently old blocks are valid. But how old is
> "sufficiently old"? First of all, if your full node implementation comes "from
> the factory" with a reasonably up-to-date minimum accepted total-work
> thresholdⁱ - in other words it won't accept a chain with less than that amount
> of total work - it may be reasonable to assume any Sybil attacker with
> sufficient hashing power to make a forked chain meeting that threshold with,
> say, six months worth of blocks has enough hashing power to threaten the main
> chain as well.
>
> That leaves public attempts to falsify TXO commitments, done out in the open by
> the majority of hashing power. In this circumstance the "assumed valid"
> threshold determines how long the attack would have to go on before full nodes
> start accepting the invalid chain, or at least, newly installed/recently reset
> full nodes. The minimum age that we can "assume valid" is a tradeoff between
> political/social/technical concerns; we probably want at least a few weeks to
> guarantee the defenders a chance to organise themselves.
>
> With this in mind, a longer-than-technically-necessary TXO commitment delayʲ
> may help ensure that full node software actually validates some minimum number
> of blocks out-of-the-box, without taking shortcuts. However this can be
> achieved in a wide variety of ways, such as the author's prev-block-proof
> proposal³, fraud proofs, or even a PoW with an inner loop dependent on
> blockchain data. Like UTXO commitments, TXO commitments are also potentially
> very useful in reducing the need for SPV wallet software to trust third parties
> providing them with transaction data.
>
> i) Checkpoints that reject any chain without a specific block are a more
> common, if uglier, way of achieving this protection.
>
> j) A good homework problem is to figure out how the TXO commitment could be
> designed such that the delay could be reduced in a soft-fork.
>
>
> ## Further Work
>
> While we've shown that TXO commitments certainly could be implemented without
> increasing peak IO bandwidth/block validation latency significantly with the
> delayed commitment approach, we're far from being certain that they should be
> implemented this way (or at all).
>
> 1) Can a TXO commitment scheme be optimized sufficiently to be used directly
> without a commitment delay? Obviously it'd be preferable to avoid all the above
> complexity entirely.
>
> 2) Is it possible to use a metric other than age, e.g. priority? While this
> complicates the pruning logic, it could use the UTXO set space more
> efficiently, especially if your goal is to prioritise bitcoin value-transfer
> over other uses (though if "normal" wallets nearly never need to use TXO
> commitment proofs to spend outputs, the infrastructure to actually do this may
> rot).
>
> 3) Should UTXO archiving be based on a fixed size UTXO set, rather than an
> age/priority/etc. threshold?
>
> 4) By fixing the problem (or possibly just "fixing" the problem) are we
> encouraging/legitimising blockchain use-cases other than BTC value transfer?
> Should we?
>
> 5) Instead of TXO commitment proofs counting towards the blocksize limit, can
> we use a different miner fairness/decentralization metric/incentive? For
> instance it might be reasonable for the TXO commitment proof size to be
> discounted, or ignored entirely, if a proof-of-propagation scheme (e.g.
> thinblocks) is used to ensure all miners have received the proof in advance.
>
> 6) How does this interact with fraud proofs? Obviously furthering dependency on
> non-cryptographically-committed STXO/UTXO databases is incompatible with the
> modularized validation approach to implementing fraud proofs.
>
>
> # References
>
> 1) "Merkle Mountain Ranges",
> Peter Todd, OpenTimestamps, Mar 18 2013,
> https://github.com/opentimestamps/opentimestamps-server/blob/master/doc/merkle-mountain-range.md
>
> 2) "Do we really need a mempool? (for relay nodes)",
> Peter Todd, bitcoin-dev mailing list, Jul 18th 2015,
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009479.html
>
> 3) "Segregated witnesses and validationless mining",
> Peter Todd, bitcoin-dev mailing list, Dec 23rd 2015,
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012103.html
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments
2016-05-17 14:25 ` Eric Lombrozo
@ 2016-05-17 18:01 ` Chris Priest
0 siblings, 0 replies; 11+ messages in thread
From: Chris Priest @ 2016-05-17 18:01 UTC (permalink / raw)
To: Eric Lombrozo, Bitcoin Protocol Discussion
On 5/17/16, Eric Lombrozo via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
> Nice!
>
> We’ve been talking about doing this forever and it’s so desperately needed.
>
"So desperately needed"? How do you figure? The UTXO set is currently
1.5 GB. What kind of computer these days doesn't have 1.5 GB of
memory? Since you people insist on keeping the blocksize limit at 1MB,
the UTXO set is stuck growing at a tiny rate. Most consumer
hardware sold these days has 8GB or more RAM; it'll take decades before
the UTXO set comes close to not fitting into 8 GB of memory.
Maybe 30 or 40 years from now I can see this change being "so
desperately needed", when nodes are falling off because the UTXO set is
too large, but that day is not today.
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments
[not found] ` <CABm2gDqMQanaY0Eo4QAnx2MrKCSP+v31R6J80jSVx+jOwsVsVw@mail.gmail.com>
@ 2016-05-18 11:14 ` Jorge Timón
2016-05-18 23:53 ` Peter Todd
0 siblings, 1 reply; 11+ messages in thread
From: Jorge Timón @ 2016-05-18 11:14 UTC (permalink / raw)
To: Bitcoin Dev, Peter Todd
[-- Attachment #1: Type: text/plain, Size: 955 bytes --]
On May 17, 2016 15:23, "Peter Todd via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> # TXO Commitments
>
> Specifically, TXO commitments propose a Merkle Mountain Range¹ (MMR), a
> type of deterministic, indexable, insertion ordered merkle tree, which allows
> new items to be cheaply appended to the tree with minimal storage requirements,
> just log2(n) "mountain tips". Once an output is added to the TXO MMR it is
> never removed; if an output is spent its status is updated in place. Both the
> state of a specific item in the MMR, as well as the validity of changes to items
> in the MMR, can be proven with log2(n) sized proofs consisting of a merkle path
> to the tip of the tree.
How expensive is it to update a leaf in this tree from unspent to spent?
Wouldn't it be better to have both an append-only TXO and an append-only
STXO (with all spent outputs, not only the latest ones like in your "STXO")?
[-- Attachment #2: Type: text/html, Size: 1180 bytes --]
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments
2016-05-18 11:14 ` Jorge Timón
@ 2016-05-18 23:53 ` Peter Todd
[not found] ` <CABm2gDrXjg_nSKr-ju0jdXxmMc4N=LQFRwaVU3ix1p-T8CVKdQ@mail.gmail.com>
0 siblings, 1 reply; 11+ messages in thread
From: Peter Todd @ 2016-05-18 23:53 UTC (permalink / raw)
To: Jorge Timón; +Cc: Bitcoin Dev
[-- Attachment #1: Type: text/plain, Size: 4405 bytes --]
On Wed, May 18, 2016 at 01:14:59PM +0200, Jorge Timón wrote:
> On May 17, 2016 15:23, "Peter Todd via bitcoin-dev" <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
> > # TXO Commitments
> >
>
> > Specifically, TXO commitments propose a Merkle Mountain Range¹ (MMR), a
> > type of deterministic, indexable, insertion ordered merkle tree, which allows
> > new items to be cheaply appended to the tree with minimal storage requirements,
> > just log2(n) "mountain tips". Once an output is added to the TXO MMR it is
> > never removed; if an output is spent its status is updated in place. Both the
> > state of a specific item in the MMR, as well as the validity of changes to items
> > in the MMR, can be proven with log2(n) sized proofs consisting of a merkle path
> > to the tip of the tree.
>
> How expensive is it to update a leaf in this tree from unspent to spent?
log2(n) operations.
I wrote a full MMR implementation with pruning support as part of my
proofchains work:
https://github.com/proofchains/python-proofmarshal/blob/master/proofmarshal/mmr.py
Documentation is a bit lacking, but I'd suggest reading the above source code
and the unit tests(1) to understand what's going on. As of writing item
retrieval by index is implemented(2), and if you follow how that works you'll
see it's log2(n) operations; changing elements in-place isn't yet
implemented(3) but would be a fun homework problem. I'll bet anyone a beer that
you'll find it can be done in k*log2(n) operations, with a reasonably small k. :)
Additionally, I also have a merkelized key:value prefix tree implementation
called a "merbinner tree" in the same library, again with pruning support. It
does implement changing elements in place(4) with log2(n) operations.
Incidentally, something I probably should have made more clear in my TXO
commitments post is that the original MMR scheme I developed for OpenTimestamps
(and independently reinvented for Certificate Transparency) is insufficient:
while you can easily extract a proof that an element is present in the MMR,
that inclusion proof doesn't do a good job of proving the position in the tree
very well. OpenTimestamps didn't need that kind of proof, and I don't think
Certificate Transparency needs it either. However many other MMR applications
do, including many types of TXO commitments.
My proofchains MMR scheme fixes this problem by making each inner node in the
MMR commit to the total number of elements under it(5) - basically it's a
merkle-sum-tree with the size of the tree being what's summed. There may be
more efficient ways to do this, but a committed sum-length is easy to
implement, and the space overhead is only 25% even in the least optimised
implementation possible.
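
(The 25% figure is presumably the cost of storing a serialized length alongside
each digest, e.g. an 8-byte count next to a 32-byte hash: 8/32 = 25% extra per
node.)
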
1) https://github.com/proofchains/python-proofmarshal/blob/3f0ba0a9d46f36377ad6c1901de19273604e6fbc/proofmarshal/test/test_mmr.py
2) https://github.com/proofchains/python-proofmarshal/blob/3f0ba0a9d46f36377ad6c1901de19273604e6fbc/proofmarshal/mmr.py#L294
3) https://github.com/proofchains/python-proofmarshal/blob/3f0ba0a9d46f36377ad6c1901de19273604e6fbc/proofmarshal/mmr.py#L230
4) https://github.com/proofchains/python-proofmarshal/blob/3f0ba0a9d46f36377ad6c1901de19273604e6fbc/proofmarshal/merbinnertree.py#L140
5) https://github.com/proofchains/python-proofmarshal/blob/3f0ba0a9d46f36377ad6c1901de19273604e6fbc/proofmarshal/mmr.py#L139
> Wouldn't it be better to have both an append-only TXO and an append-only
> STXO (with all spent outputs, not only the latest ones like in your "STXO")?
Nope. The reason why this doesn't work is apparent when you ask: how will the
STXO be indexed?
If it's indexed by outpoint - that is H(txid:n) - to update the STXO you need
the entire thing, as the position of any new STXO that you need to add to the
STXO tree is random.
OTOH, if you index the STXO by txout creation order, with the first txout ever
created having position #0, the second #1, etc. the data you may need to update
the STXO later has predictable locality... but now you have something that's
basically identical to my proposed insertion-ordered TXO commitment anyway.
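
To make the indexing difference concrete, a hypothetical sketch of the two key
derivations; only the second gives newly spent entries predictable locality:

    import hashlib

    def outpoint_key(txid: bytes, n: int) -> bytes:
        # Keyed by H(txid:n): new spends land at effectively random positions.
        return hashlib.sha256(txid + n.to_bytes(4, 'little')).digest()

    def creation_order_key(txout_index: int) -> int:
        # Keyed by global creation order: new entries cluster by age, which is
        # exactly what an insertion-ordered TXO MMR already provides.
        return txout_index
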
Incidentally, it's interesting how if a merbinner tree is insertion-order
indexed you end up with a datastructure that's almost identical to a MMR.
--
https://petertodd.org 'peter'[:-1]@petertodd.org
[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 455 bytes --]
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments
[not found] ` <CABm2gDp9N3ZEZcmF28ESv3V7v_HqU5e5KHY69cSxcVm0t7BeDQ@mail.gmail.com>
@ 2016-05-19 9:31 ` Jorge Timón
0 siblings, 0 replies; 11+ messages in thread
From: Jorge Timón @ 2016-05-19 9:31 UTC (permalink / raw)
To: Peter Todd; +Cc: Bitcoin Dev
[-- Attachment #1: Type: text/plain, Size: 1753 bytes --]
On May 19, 2016 01:53, "Peter Todd" <pete@petertodd.org> wrote:
> >
> > How expensive is it to update a leaf in this tree from unspent to
> > spent?
>
> log2(n) operations.
Updating a leaf is just as expensive as adding a new one?
That's not what I expected.
Or is adding a new one O(1)?
Anyway, thanks, I'll read this in more detail.
> > Wouldn't it be better to have both an append-only TXO and an append-only
> > STXO (with all spent outputs, not only the latest ones like in your
"STXO")?
>
> Nope. The reason why this doesn't work is apparent when you ask: how will the
> STXO be indexed?
Just the same way the TXO is (you just stop updating the txo leafs from
unspent to spent).
> If it's indexed by outpoint - that is H(txid:n) - to update the STXO you
> need the entire thing, as the position of any new STXO that you need to add
> to the STXO tree is random.
>
> OTOH, if you index the STXO by txout creation order, with the first txout
> ever created having position #0, the second #1, etc. the data you may need
> to update the STXO later has predictable locality... but now you have
> something that's basically identical to my proposed insertion-ordered TXO
> commitment anyway.
Yeah, that's what I want. Like your append-only TXO but for STXO (that way
we avoid ever updating leafs in the TXO, and I suspect there are other
advantages for fraud proofs).
> Incidentally, it's interesting how if a merbinner tree is insertion-order
> indexed you end up with a datastructure that's almost identical to a MMR.
No complaint with MMR. My point is having two of them, separated: one for the
TXO (entries immutable) and one for the STXO (again, entries immutable).
Maybe it doesn't make sense, but I would like to understand why.
[-- Attachment #2: Type: text/html, Size: 2176 bytes --]
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments
2016-05-17 13:23 [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments Peter Todd
` (2 preceding siblings ...)
[not found] ` <CABm2gDoj=6CimHm2C0H_qa=o5SRqWr0ZTGamf-qT-kUjt5WXTA@mail.gmail.com>
@ 2016-05-19 22:23 ` Nick ODell
2016-05-20 8:45 ` Peter Todd
2016-05-20 9:46 ` Johnson Lau
4 siblings, 1 reply; 11+ messages in thread
From: Nick ODell @ 2016-05-19 22:23 UTC (permalink / raw)
To: bitcoin-dev, Peter Todd
[-- Attachment #1: Type: text/plain, Size: 29327 bytes --]
What if two people create transactions from outpoints within the same MMR
tree tip, at the same time?
For example, I create transaction A plus an MMR proof that MMR tip X will
become Y.
On the other side of the planet, someone else creates transaction B, plus
an MMR proof that tip X will become Z.
Can a miner who receives A and B put both into a block, without access to
the outputs that were pruned?
# Motivation
UTXO growth is a serious concern for Bitcoin's long-term decentralization.
To
run a competitive mining operation potentially the entire UTXO set must be
in
RAM to achieve competitive latency; your larger, more centralized,
competitors
will have the UTXO set in RAM. Mining is a zero-sum game, so the extra
latency
of not doing so if they do directly impacts your profit margin. Secondly,
having possession of the UTXO set is one of the minimum requirements to run
a
full node; the larger the set the harder it is to run a full node.
Currently the maximum size of the UTXO set is unbounded as there is no
consensus rule that limits growth, other than the block-size limit itself;
as
of writing the UTXO set is 1.3GB in the on-disk, compressed serialization,
which expands to significantly more in memory. UTXO growth is driven by a
number of factors, including the fact that there is little incentive to
merge
inputs, lost coins, dust outputs that can't be economically spent, and
non-btc-value-transfer "blockchain" use-cases such as anti-replay oracles
and
timestamping.
We don't have good tools to combat UTXO growth. Segregated Witness proposes
to
give witness space a 75% discount, in part of make reducing the UTXO set
size
by spending txouts cheaper. While this may change wallets to more often
spend
dust, it's hard to imagine an incentive sufficiently strong to discourage
most,
let alone all, UTXO growing behavior.
For example, timestamping applications often create unspendable outputs due
to
ease of implementation, and because doing so is an easy way to make sure
that
the data required to reconstruct the timestamp proof won't get lost - all
Bitcoin full nodes are forced to keep a copy of it. Similarly anti-replay
use-cases like using the UTXO set for key rotation piggyback on the uniquely
strong security and decentralization guarantee that Bitcoin provides; it's
very
difficult - perhaps impossible - to provide these applications with
alternatives that are equally secure. These non-btc-value-transfer use-cases
can often afford to pay far higher fees per UTXO created than competing
btc-value-transfer use-cases; many users could afford to spend $50 to
register
a new PGP key, yet would rather not spend $50 in fees to create a standard
two
output transaction. Effective techniques to resist miner censorship exist,
so
without resorting to whitelists blocking non-btc-value-transfer use-cases as
"spam" is not a long-term, incentive compatible, solution.
A hard upper limit on UTXO set size could create a more level playing field
in
the form of fixed minimum requirements to run a performant Bitcoin node, and
make the issue of UTXO "spam" less important. However, making any coins
unspendable, regardless of age or value, is a politically untenable economic
change.
# TXO Commitments
A merkle tree committing to the state of all transaction outputs, both spent
and unspent, we can provide a method of compactly proving the current state
of
an output. This lets us "archive" less frequently accessed parts of the UTXO
set, allowing full nodes to discard the associated data, still providing a
mechanism to spend those archived outputs by proving to those nodes that the
outputs are in fact unspent.
Specifically TXO commitments proposes a Merkle Mountain Range¹ (MMR), a
type of deterministic, indexable, insertion ordered merkle tree, which
allows
new items to be cheaply appended to the tree with minimal storage
requirements,
just log2(n) "mountain tips". Once an output is added to the TXO MMR it is
never removed; if an output is spent its status is updated in place. Both
the
state of a specific item in the MMR, as well the validity of changes to
items
in the MMR, can be proven with log2(n) sized proofs consisting of a merkle
path
to the tip of the tree.
At an extreme, with TXO commitments we could even have no UTXO set at all,
entirely eliminating the UTXO growth problem. Transactions would simply be
accompanied by TXO commitment proofs showing that the outputs they wanted to
spend were still unspent; nodes could update the state of the TXO MMR purely
from TXO commitment proofs. However, the log2(n) bandwidth overhead per
txin is
substantial, so a more realistic implementation is be to have a UTXO cache
for
recent transactions, with TXO commitments acting as a alternate for the
(rare)
event that an old txout needs to be spent.
Proofs can be generated and added to transactions without the involvement of
the signers, even after the fact; there's no need for the proof itself to
signed and the proof is not part of the transaction hash. Anyone with
access to
TXO MMR data can (re)generate missing proofs, so minimal, if any, changes
are
required to wallet software to make use of TXO commitments.
## Delayed Commitments
TXO commitments aren't a new idea - the author proposed them years ago in
response to UTXO commitments. However it's critical for small miners' orphan
rates that block validation be fast, and so far it has proven difficult to
create (U)TXO implementations with acceptable performance; updating and
recalculating cryptographicly hashed merkelized datasets is inherently more
work than not doing so. Fortunately if we maintain a UTXO set for recent
outputs, TXO commitments are only needed when spending old, archived,
outputs.
We can take advantage of this by delaying the commitment, allowing it to be
calculated well in advance of it actually being used, thus changing a
latency-critical task into a much easier average throughput problem.
Concretely each block B_i commits to the TXO set state as of block B_{i-n},
in
other words what the TXO commitment would have been n blocks ago, if not for
the n block delay. Since that commitment only depends on the contents of the
blockchain up until block B_{i-n}, the contents of any block after are
irrelevant to the calculation.
## Implementation
Our proposed high-performance/low-latency delayed commitment full-node
implementation needs to store the following data:
1) UTXO set
Low-latency K:V map of txouts definitely known to be unspent. Similar to
existing UTXO implementation, but with the key difference that old,
unspent, outputs may be pruned from the UTXO set.
2) STXO set
Low-latency set of transaction outputs known to have been spent by
transactions after the most recent TXO commitment, but created prior to
the
TXO commitment.
3) TXO journal
FIFO of outputs that need to be marked as spent in the TXO MMR. Appends
must be low-latency; removals can be high-latency.
4) TXO MMR list
Prunable, ordered list of TXO MMR's, mainly the highest pending
commitment,
backed by a reference counted, cryptographically hashed object store
indexed by digest (similar to how git repos work). High-latency ok.
We'll
cover this in more in detail later.
### Fast-Path: Verifying a Txout Spend In a Block
When a transaction output is spent by a transaction in a block we have two
cases:
1) Recently created output
Output created after the most recent TXO commitment, so it should be in
the
UTXO set; the transaction spending it does not need a TXO commitment
proof.
Remove the output from the UTXO set and append it to the TXO journal.
2) Archived output
Output created prior to the most recent TXO commitment, so there's no
guarantee it's in the UTXO set; transaction will have a TXO commitment
proof for the most recent TXO commitment showing that it was unspent.
Check that the output isn't already in the STXO set (double-spent), and
if
not add it. Append the output and TXO commitment proof to the TXO
journal.
In both cases recording an output as spent requires no more than two
key:value
updates, and one journal append. The existing UTXO set requires one
key:value
update per spend, so we can expect new block validation latency to be
within 2x
of the status quo even in the worst case of 100% archived output spends.
### Slow-Path: Calculating Pending TXO Commitments
In a low-priority background task we flush the TXO journal, recording the
outputs spent by each block in the TXO MMR, and hashing MMR data to obtain
the
TXO commitment digest. Additionally this background task removes STXO's that
have been recorded in TXO commitments, and prunes TXO commitment data no
longer
needed.
Throughput for the TXO commitment calculation will be worse than the
existing
UTXO only scheme. This impacts bulk verification, e.g. initial block
download.
That said, TXO commitments provides other possible tradeoffs that can
mitigate
impact of slower validation throughput, such as skipping validation of old
history, as well as fraud proof approaches.
### TXO MMR Implementation Details
Each TXO MMR state is a modification of the previous one with most
information
shared, so we an space-efficiently store a large number of TXO commitments
states, where each state is a small delta of the previous state, by sharing
unchanged data between each state; cycles are impossible in merkelized data
structures, so simple reference counting is sufficient for garbage
collection.
Data no longer needed can be pruned by dropping it from the database, and
unpruned by adding it again. Since everything is committed to via
cryptographic
hash, we're guaranteed that regardless of where we get the data, after
unpruning we'll have the right data.
Let's look at how the TXO MMR works in detail. Consider the following TXO
MMR
with two txouts, which we'll call state #0:
0
/ \
a b
If we add another entry we get state #1:
1
/ \
0 \
/ \ \
a b c
Note how it 100% of the state #0 data was reused in commitment #1. Let's
add two more entries to get state #2:
2
/ \
2 \
/ \ \
/ \ \
/ \ \
0 2 \
/ \ / \ \
a b c d e
This time part of state #1 wasn't reused - it's wasn't a perfect binary
tree - but we've still got a lot of re-use.
Now suppose state #2 is committed into the blockchain by the most recent
block.
Future transactions attempting to spend outputs created as of state #2 are
obliged to prove that they are unspent; essentially they're forced to
provide
part of the state #2 MMR data. This lets us prune that data, discarding it,
leaving us with only the bare minimum data we need to append new txouts to
the
TXO MMR, the tips of the perfect binary trees ("mountains") within the MMR:
2
/ \
2 \
\
\
\
\
\
e
Note that we're glossing over some nuance here about exactly what data
needs to
be kept; depending on the details of the implementation the only data we
need
for nodes "2" and "e" may be their hash digest.
Adding another three more txouts results in state #3:
3
/ \
/ \
/ \
/ \
/ \
/ \
/ \
2 3
/ \
/ \
/ \
3 3
/ \ / \
e f g h
Suppose recently created txout f is spent. We have all the data required to
update the MMR, giving us state #4. It modifies two inner nodes and one leaf
node:
4
/ \
/ \
/ \
/ \
/ \
/ \
/ \
2 4
/ \
/ \
/ \
4 3
/ \ / \
e (f) g h
If an archived txout is spent requires the transaction to provide the merkle
path to the most recently committed TXO, in our case state #2. If txout b is
spent that means the transaction must provide the following data from state
#2:
2
/
2
/
/
/
0
\
b
We can add that data to our local knowledge of the TXO MMR, unpruning part
of
it:
4
/ \
/ \
/ \
/ \
/ \
/ \
/ \
2 4
/ / \
/ / \
/ / \
0 4 3
\ / \ / \
b e (f) g h
Remember, we haven't _modified_ state #4 yet; we just have more data about
it.
When we mark txout b as spent we get state #5:
5
/ \
/ \
/ \
/ \
/ \
/ \
/ \
5 4
/ / \
/ / \
/ / \
5 4 3
\ / \ / \
(b) e (f) g h
Secondly by now state #3 has been committed into the chain, and transactions
that want to spend txouts created as of state #3 must provide a TXO proof
consisting of state #3 data. The leaf nodes for outputs g and h, and the
inner
node above them, are part of state #3, so we prune them:
5
/ \
/ \
/ \
/ \
/ \
/ \
/ \
5 4
/ /
/ /
/ /
5 4
\ / \
(b) e (f)
Finally, lets put this all together, by spending txouts a, c, and g, and
creating three new txouts i, j, and k. State #3 was the most recently
committed
state, so the transactions spending a and g are providing merkle paths up to
it. This includes part of the state #2 data:
3
/ \
/ \
/ \
/ \
/ \
/ \
/ \
2 3
/ \ \
/ \ \
/ \ \
0 2 3
/ / /
a c g
After unpruning we have the following data for state #5:
5
/ \
/ \
/ \
/ \
/ \
/ \
/ \
5 4
/ \ / \
/ \ / \
/ \ / \
5 2 4 3
/ \ / / \ /
a (b) c e (f) g
That's sufficient to mark the three outputs as spent and add the three new
txouts, resulting in state #6:
6
/ \
/ \
/ \
/ \
/ \
6 \
/ \ \
/ \ \
/ \ \
/ \ \
/ \ \
/ \ \
/ \ \
6 6 \
/ \ / \ \
/ \ / \ 6
/ \ / \ / \
6 6 4 6 6 \
/ \ / / \ / / \ \
(a) (b) (c) e (f) (g) i j k
Again, state #4 related data can be pruned. In addition, depending on how
the
STXO set is implemented may also be able to prune data related to spent
txouts
after that state, including inner nodes where all txouts under them have
been
spent (more on pruning spent inner nodes later).
### Consensus and Pruning
It's important to note that pruning behavior is consensus critical: a full
node
that is missing data due to pruning it too soon will fall out of consensus,
and
a miner that fails to include a merkle proof that is required by the
consensus
is creating an invalid block. At the same time many full nodes will have
significantly more data on hand than the bare minimum so they can help
wallets
make transactions spending old coins; implementations should strongly
consider
separating the data that is, and isn't, strictly required for consensus.
A reasonable approach for the low-level cryptography may be to actually
treat
the two cases differently, with the TXO commitments committing too what data
does and does not need to be kept on hand by the UTXO expiration rules. On
the
other hand, leaving that uncommitted allows for certain types of soft-forks
where the protocol is changed to require more data than it previously did.
### Consensus Critical Storage Overheads
Only the UTXO and STXO sets need to be kept on fast random access storage.
Since STXO set entries can only be created by spending a UTXO - and are
smaller
than a UTXO entry - we can guarantee that the peak size of the UTXO and STXO
sets combined will always be less than the peak size of the UTXO set alone
in
the existing UTXO-only scheme (though the combined size can be temporarily
higher than what the UTXO set size alone would be when large numbers of
archived txouts are spent).
TXO journal entries and unpruned entries in the TXO MMR have log2(n) maximum
overhead per entry: a unique merkle path to a TXO commitment (by "unique" we
mean that no other entry shares data with it). On a reasonably fast system
the
TXO journal will be flushed quickly, converting it into TXO MMR data; the
TXO
journal will never be more than a few blocks in size.
Transactions spending non-archived txouts are not required to provide any
TXO
commitment data; we must have that data on hand in the form of one TXO MMR
entry per UTXO. Once spent however the TXO MMR leaf node associated with
that
non-archived txout can be immediately pruned - it's no longer in the UTXO
set
so any attempt to spend it will fail; the data is now immutable and we'll
never
need it again. Inner nodes in the TXO MMR can also be pruned if all leafs
under
them are fully spent; detecting this is easy the TXO MMR is a merkle-sum
tree,
with each inner node committing to the sum of the unspent txouts under it.
When a archived txout is spent the transaction is required to provide a
merkle
path to the most recent TXO commitment. As shown above that path is
sufficient
information to unprune the necessary nodes in the TXO MMR and apply the
spend
immediately, reducing this case to the TXO journal size question
(non-consensus
critical overhead is a different question, which we'll address in the next
section).
Taking all this into account the only significant storage overhead of our
TXO
commitments scheme when compared to the status quo is the log2(n) merkle
path
overhead; as long as less than 1/log2(n) of the UTXO set is active,
non-archived, UTXO's we've come out ahead, even in the unrealistic case
where
all storage available is equally fast. In the real world that isn't yet the
case - even SSD's significantly slower than RAM.
### Non-Consensus Critical Storage Overheads
Transactions spending archived txouts pose two challenges:
1) Obtaining up-to-date TXO commitment proofs
2) Updating those proofs as blocks are mined
The first challenge can be handled by specialized archival nodes, not unlike
how some nodes make transaction data available to wallets via bloom filters
or
the Electrum protocol. There's a whole variety of options available, and the
the data can be easily sharded to scale horizontally; the data is
self-validating allowing horizontal scaling without trust.
While miners and relay nodes don't need to be concerned about the initial
commitment proof, updating that proof is another matter. If a node
aggressively
prunes old versions of the TXO MMR as it calculates pending TXO
commitments, it
won't have the data available to update the TXO commitment proof to be
against
the next block, when that block is found; the child nodes of the TXO MMR tip
are guaranteed to have changed, yet aggressive pruning would have discarded
that
data.
Relay nodes could ignore this problem if they simply accept the fact that
they'll only be able to fully relay the transaction once, when it is
initially
broadcast, and won't be able to provide mempool functionality after the
initial
relay. Modulo high-latency mixnets, this is probably acceptable; the author
has
previously argued that relay nodes don't need a mempool² at all.
For a miner though not having the data necessary to update the proofs as
blocks
are found means potentially losing out on transactions fees. So how much
extra
data is necessary to make this a non-issue?
Since the TXO MMR is insertion ordered, spending a non-archived txout can
only
invalidate the upper nodes in of the archived txout's TXO MMR proof (if this
isn't clear, imagine a two-level scheme, with a per-block TXO MMRs,
committed
by a master MMR for all blocks). The maximum number of relevant inner nodes
changed is log2(n) per block, so if there are n non-archival blocks between
the
most recent TXO commitment and the pending TXO MMR tip, we have to store
log2(n)*n inner nodes - on the order of a few dozen MB even when n is a
(seemingly ridiculously high) year worth of blocks.
Archived txout spends on the other hand can invalidate TXO MMR proofs at any
level - consider the case of two adjacent txouts being spent. To guarantee
success requires storing full proofs. However, they're limited by the
blocksize
limit, and additionally are expected to be relatively uncommon. For
example, if
1% of 1MB blocks was archival spends, our hypothetical year long TXO
commitment
delay is only a few hundred MB of data with low-IO-performance requirements.
## Security Model
Of course, a TXO commitment delay of a year sounds ridiculous. Even the
slowest
imaginable computer isn't going to need more than a few blocks of TXO
commitment delay to keep up ~100% of the time, and there's no reason why we
can't have the UTXO archive delay be significantly longer than the TXO
commitment delay.
However, as with UTXO commitments, TXO commitments raise issues with
Bitcoin's
security model by allowing relatively miners to profitably mine transactions
without bothering to validate prior history. At the extreme, if there was no
commitment delay at all at the cost of a bit of some extra network bandwidth
"full" nodes could operate and even mine blocks completely statelessly by
expecting all transactions to include "proof" that their inputs are
unspent; a
TXO commitment proof for a commitment you haven't verified isn't a proof
that a
transaction output is unspent, it's a proof that some miners claimed the
txout
was unspent.
At one extreme, we could simply implement TXO commitments in a "virtual"
fashion, without miners actually including the TXO commitment digest in
their
blocks at all. Full nodes would be forced to compute the commitment from
scratch, in the same way they are forced to compute the UTXO state, or total
work. Of course a full node operator who doesn't want to verify old history
can
get a copy of the TXO state from a trusted source - no different from how
you
could get a copy of the UTXO set from a trusted source.
A more pragmatic approach is to accept that people will do that anyway, and
instead assume that sufficiently old blocks are valid. But how old is
"sufficiently old"? First of all, if your full node implementation comes
"from
the factory" with a reasonably up-to-date minimum accepted total-work
thresholdⁱ - in other words it won't accept a chain with less than that
amount
of total work - it may be reasonable to assume any Sybil attacker with
sufficient hashing power to make a forked chain meeting that threshold with,
say, six months worth of blocks has enough hashing power to threaten the
main
chain as well.
That leaves public attempts to falsify TXO commitments, done out in the
open by
the majority of hashing power. In this circumstance the "assumed valid"
threshold determines how long the attack would have to go on before full
nodes
start accepting the invalid chain, or at least, newly installed/recently
reset
full nodes. The minimum age that we can "assume valid" is tradeoff between
political/social/technical concerns; we probably want at least a few weeks
to
guarantee the defenders a chance to organise themselves.
With this in mind, a longer-than-technically-necessary TXO commitment delayʲ
may help ensure that full node software actually validates some minimum
number
of blocks out-of-the-box, without taking shortcuts. However this can be
achieved in a wide variety of ways, such as the author's prev-block-proof
proposal³, fraud proofs, or even a PoW with an inner loop dependent on
blockchain data. Like UTXO commitments, TXO commitments are also potentially
very useful in reducing the need for SPV wallet software to trust third
parties
providing them with transaction data.
i) Checkpoints that reject any chain without a specific block are a more
common, if uglier, way of achieving this protection.
j) A good homework problem is to figure out how the TXO commitment could be
designed such that the delay could be reduced in a soft-fork.
## Further Work
While we've shown that TXO commitments certainly could be implemented
without
increasing peak IO bandwidth/block validation latency significantly with the
delayed commitment approach, we're far from being certain that they should
be
implemented this way (or at all).
1) Can a TXO commitment scheme be optimized sufficiently to be used directly
without a commitment delay? Obviously it'd be preferable to avoid all the
above complexity entirely.
2) Is it possible to use a metric other than age, e.g. priority? While this
complicates the pruning logic, it could use the UTXO set space more
efficiently, especially if your goal is to prioritise bitcoin value-transfer
over other uses (though if "normal" wallets nearly never need to use TXO
commitment proofs to spend outputs, the infrastructure to actually do this
may rot).
3) Should UTXO archiving be based on a fixed-size UTXO set, rather than an
age/priority/etc. threshold?
4) By fixing the problem (or possibly just "fixing" the problem), are we
encouraging/legitimising blockchain use-cases other than BTC value transfer?
Should we?
5) Instead of TXO commitment proofs counting towards the blocksize limit, can
we use a different miner fairness/decentralization metric/incentive? For
instance, it might be reasonable for the TXO commitment proof size to be
discounted, or ignored entirely, if a proof-of-propagation scheme (e.g.
thinblocks) is used to ensure all miners have received the proof in advance.
6) How does this interact with fraud proofs? Obviously, furthering dependency
on non-cryptographically-committed STXO/UTXO databases is incompatible with
the modularized validation approach to implementing fraud proofs.
# References
1) "Merkle Mountain Ranges",
Peter Todd, OpenTimestamps, Mar 18 2013,
https://github.com/opentimestamps/opentimestamps-server/blob/master/doc/merkle-mountain-range.md
2) "Do we really need a mempool? (for relay nodes)",
Peter Todd, bitcoin-dev mailing list, Jul 18th 2015,
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009479.html
3) "Segregated witnesses and validationless mining",
Peter Todd, bitcoin-dev mailing list, Dec 23rd 2015,
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012103.html
--
https://petertodd.org 'peter'[:-1]@petertodd.org
* Re: [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments
2016-05-19 22:23 ` Nick ODell
@ 2016-05-20 8:45 ` Peter Todd
0 siblings, 0 replies; 11+ messages in thread
From: Peter Todd @ 2016-05-20 8:45 UTC (permalink / raw)
To: Nick ODell; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 939 bytes --]
On Thu, May 19, 2016 at 04:23:28PM -0600, Nick ODell wrote:
> What if two people create transactions from outpoints within the same MMR
> tree tip, at the same time?
>
> For example, I create transaction A plus an MMR proof that MMR tip X will
> become Y.
>
> On the other side of the planet, someone else creates transaction B, plus
> an MMR proof that tip X will become Z.
>
> Can a miner who receives A and B put both into a block, without access to
> the outputs that were pruned?
The MMR proofs provided by transactions aren't proofs of *how* the MMR should
be changed; they're just proofs that the MMR is in a certain state right now.
Your situation is just an example of a double-spend that miners have to
detect if they don't want to create invalid blocks. Specifically, if I
understand your example correctly, they'd be rejected by the STXO set.
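A minimal sketch of that detection step, with hypothetical names rather than
code from the proposal: both transactions can carry valid unspentness proofs
against the same committed MMR state X, but whichever spend of the archived
outpoint arrives second is caught by the STXO set.

    # Hedged sketch: assumes each transaction's TXO proof has already been
    # verified against the current committed MMR state; the STXO set then
    # catches double-spends of archived outpoints.
    stxo_set = set()  # archived outpoints spent since the last commitment

    def accept_archived_spend(outpoint, proof_ok):
        if not proof_ok or outpoint in stxo_set:
            return False           # bad proof, or outpoint already spent
        stxo_set.add(outpoint)
        return True

    # Transactions A and B both spend the same pruned outpoint:
    assert accept_archived_spend(("some_txid", 0), True)      # A accepted
    assert not accept_archived_spend(("some_txid", 0), True)  # B rejected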
--
https://petertodd.org 'peter'[:-1]@petertodd.org
* Re: [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments
2016-05-17 13:23 [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments Peter Todd
` (3 preceding siblings ...)
2016-05-19 22:23 ` Nick ODell
@ 2016-05-20 9:46 ` Johnson Lau
2016-05-22 8:55 ` Peter Todd
4 siblings, 1 reply; 11+ messages in thread
From: Johnson Lau @ 2016-05-20 9:46 UTC (permalink / raw)
To: Peter Todd, bitcoin-dev
[-- Attachment #1.1: Type: text/plain, Size: 1680 bytes --]
How does this compare to my earlier proposal: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/011952.html ?
In my proposal, only the (pruned) UTXO set, and 32 bytes per archived block, are required for mining. But it is probably more difficult for people to spend an archived output. They need to know the status of other archived outputs from the same block. A full re-scan of the blockchain may be needed to generate the proof, but this could be done by a third-party archival node.
>
>
>
> ## Implementation
>
> Our proposed high-performance/low-latency delayed commitment full-node
> implementation needs to store the following data:
>
> 1) UTXO set
>
> Low-latency K:V map of txouts definitely known to be unspent. Similar to
> existing UTXO implementation, but with the key difference that old,
> unspent, outputs may be pruned from the UTXO set.
>
>
> 2) STXO set
>
> Low-latency set of transaction outputs known to have been spent by
> transactions after the most recent TXO commitment, but created prior to the
> TXO commitment.
>
>
> 3) TXO journal
>
> FIFO of outputs that need to be marked as spent in the TXO MMR. Appends
> must be low-latency; removals can be high-latency.
>
>
> 4) TXO MMR list
>
> Prunable, ordered list of TXO MMRs, mainly the highest pending commitment,
> backed by a reference-counted, cryptographically hashed object store
> indexed by digest (similar to how git repos work). High-latency ok. We'll
> cover this in more detail later.
>
>
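For reference, the four stores quoted above might be sketched roughly as
follows; the names and container types are hypothetical, and only the latency
requirements matter, not the concrete types:

    from collections import deque

    # Rough sketch of the quoted stores; all names are illustrative.
    class DelayedCommitmentState:
        def __init__(self):
            self.utxo_set = {}          # 1) low-latency outpoint -> txout map;
                                        #    old unspent outputs may be pruned
            self.stxo_set = set()       # 2) outpoints created before the last
                                        #    commitment but spent after it
            self.txo_journal = deque()  # 3) FIFO of spends awaiting application
                                        #    to the TXO MMR (cheap appends)
            self.txo_mmr_store = {}     # 4) digest -> node object store backing
                                        #    the pruned TXO MMRs (high latency ok)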
* Re: [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments
2016-05-20 9:46 ` Johnson Lau
@ 2016-05-22 8:55 ` Peter Todd
0 siblings, 0 replies; 11+ messages in thread
From: Peter Todd @ 2016-05-22 8:55 UTC (permalink / raw)
To: Johnson Lau; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 1203 bytes --]
On Fri, May 20, 2016 at 11:46:32AM +0200, Johnson Lau wrote:
> How does this compare to my earlier proposal: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/011952.html ?
>
> In my proposal, only the (pruned) UTXO set, and 32 bytes per archived block, are required for mining. But it is probably more difficult for people to spend an archived output. They need to know the status of other archived outputs from the same block. A full re-scan of the blockchain may be needed to generate the proof, but this could be done by a third-party archival node.
We're working along the same lines, but my proposal is much better fleshed
out; I think you'll find you missed a few details if you work yours out in
more detail. For instance, since your dormant UTXO list is indexed by UTXO
expiration order, it's not possible to do any kind of verification that the
contents of that commitment are correct without the global state of all UTXO
data - you have no ability to verify locally, as nothing commits to the
contents of the UTXO set.
--
https://petertodd.org 'peter'[:-1]@petertodd.org