From: Jameson Lopp
To: Peter Todd, Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Date: Tue, 17 May 2016 10:03:22 -0400
Subject: Re: [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments
In-Reply-To: <20160517132311.GA21656@fedora-21-dvm>

Great post, Peter.

> 4) By fixing the problem (or possibly just "fixing" the problem) are we
> encouraging/legitimising blockchain use-cases other than BTC value
> transfer? Should we?

I don't think it would encourage non-value-transfer usage more because, as
you noted, many such use cases are valuable enough that people are willing
to pay much higher transaction fees in order to have their data timestamped.
I think it's more an issue of the block space / transaction fee market,
since the cost of making a transaction is borne directly by users, whereas
the cost of the UTXO set may not be borne by them if they don't run a full
node.

I'm of the opinion that if the world decides that Bitcoin is more valuable
as a trustworthy generalized timestamping mechanism than as a value transfer
system, protocol developers shouldn't try to steer the ship against the
wind.
As more people and use cases enter the ecosystem, the most valuable ones
ought to survive - I hope that this market will be fostered by the
developers.

- Jameson

On Tue, May 17, 2016 at 9:23 AM, Peter Todd via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> # Motivation
>
> UTXO growth is a serious concern for Bitcoin's long-term decentralization.
> To run a competitive mining operation, potentially the entire UTXO set
> must be in RAM to achieve competitive latency; your larger, more
> centralized competitors will have the UTXO set in RAM. Mining is a
> zero-sum game, so if they keep the set in RAM and you don't, the extra
> latency directly impacts your profit margin. Secondly, having possession
> of the UTXO set is one of the minimum requirements to run a full node; the
> larger the set, the harder it is to run a full node.
>
> Currently the maximum size of the UTXO set is unbounded, as there is no
> consensus rule that limits growth other than the block-size limit itself;
> as of writing the UTXO set is 1.3GB in the on-disk, compressed
> serialization, which expands to significantly more in memory. UTXO growth
> is driven by a number of factors, including the fact that there is little
> incentive to merge inputs, lost coins, dust outputs that can't be
> economically spent, and non-btc-value-transfer "blockchain" use-cases such
> as anti-replay oracles and timestamping.
>
> We don't have good tools to combat UTXO growth. Segregated Witness
> proposes to give witness space a 75% discount, in part to make reducing
> the UTXO set size by spending txouts cheaper. While this may lead wallets
> to more often spend dust, it's hard to imagine an incentive sufficiently
> strong to discourage most, let alone all, UTXO-growing behavior.
> For example, timestamping applications often create unspendable outputs
> due to ease of implementation, and because doing so is an easy way to make
> sure that the data required to reconstruct the timestamp proof won't get
> lost - all Bitcoin full nodes are forced to keep a copy of it. Similarly,
> anti-replay use-cases like using the UTXO set for key rotation piggyback
> on the uniquely strong security and decentralization guarantee that
> Bitcoin provides; it's very difficult - perhaps impossible - to provide
> these applications with alternatives that are equally secure. These
> non-btc-value-transfer use-cases can often afford to pay far higher fees
> per UTXO created than competing btc-value-transfer use-cases; many users
> could afford to spend $50 to register a new PGP key, yet would rather not
> spend $50 in fees to create a standard two-output transaction. Effective
> techniques to resist miner censorship exist, so without resorting to
> whitelists, blocking non-btc-value-transfer use-cases as "spam" is not a
> long-term, incentive-compatible solution.
>
> A hard upper limit on UTXO set size could create a more level playing
> field in the form of fixed minimum requirements to run a performant
> Bitcoin node, and make the issue of UTXO "spam" less important. However,
> making any coins unspendable, regardless of age or value, is a politically
> untenable economic change.
>
>
> # TXO Commitments
>
> With a merkle tree committing to the state of all transaction outputs,
> both spent and unspent, we can provide a method of compactly proving the
> current state of an output. This lets us "archive" less frequently
> accessed parts of the UTXO set, allowing full nodes to discard the
> associated data while still providing a mechanism to spend those archived
> outputs by proving to those nodes that the outputs are in fact unspent.
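The core idea above - proving an output's state against a merkle commitment - can be illustrated with a minimal, self-contained sketch. This is not the proposal's actual data structure (the proposal uses an MMR, described below); all function names and the leaf encoding here are hypothetical, chosen only to show how a log-sized merkle path proves one output's state against a single root digest:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def leaf(txout: bytes, spent: bool) -> bytes:
    # A leaf commits to the output and its spent/unspent status.
    return h(b"leaf" + txout + (b"\x01" if spent else b"\x00"))

def build_tree(leaves):
    """Return (root, per-leaf merkle paths) for a power-of-two leaf count."""
    paths = [[] for _ in leaves]
    level = list(leaves)
    groups = [[i] for i in range(len(leaves))]  # leaves under each node
    while len(level) > 1:
        nxt, nxt_groups = [], []
        for i in range(0, len(level), 2):
            l_node, r_node = level[i], level[i + 1]
            for j in groups[i]:
                paths[j].append(("R", r_node))   # sibling is on the right
            for j in groups[i + 1]:
                paths[j].append(("L", l_node))   # sibling is on the left
            nxt.append(h(b"node" + l_node + r_node))
            nxt_groups.append(groups[i] + groups[i + 1])
        level, groups = nxt, nxt_groups
    return level[0], paths

def verify(root, leaf_hash, path) -> bool:
    # Recompute the root from the leaf and its sibling path.
    cur = leaf_hash
    for side, sibling in path:
        cur = h(b"node" + cur + sibling) if side == "R" else h(b"node" + sibling + cur)
    return cur == root

# Four txouts, none spent; prove txout b's state against the commitment.
leaves = [leaf(x, spent=False) for x in (b"a", b"b", b"c", b"d")]
root, paths = build_tree(leaves)
assert verify(root, leaves[1], paths[1])
```

A node holding only `root` can check any output's claimed state from a path of log2(n) hashes, which is what lets full nodes discard archived data.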
> Specifically, TXO commitments propose a Merkle Mountain Range¹ (MMR), a
> type of deterministic, indexable, insertion-ordered merkle tree, which
> allows new items to be cheaply appended to the tree with minimal storage
> requirements - just log2(n) "mountain tips". Once an output is added to
> the TXO MMR it is never removed; if an output is spent its status is
> updated in place. Both the state of a specific item in the MMR, and the
> validity of changes to items in the MMR, can be proven with log2(n)-sized
> proofs consisting of a merkle path to the tip of the tree.
>
> At an extreme, with TXO commitments we could even have no UTXO set at all,
> entirely eliminating the UTXO growth problem. Transactions would simply be
> accompanied by TXO commitment proofs showing that the outputs they wanted
> to spend were still unspent; nodes could update the state of the TXO MMR
> purely from TXO commitment proofs. However, the log2(n) bandwidth overhead
> per txin is substantial, so a more realistic implementation is to have a
> UTXO cache for recent transactions, with TXO commitments acting as an
> alternative for the (rare) event that an old txout needs to be spent.
>
> Proofs can be generated and added to transactions without the involvement
> of the signers, even after the fact; there's no need for the proof itself
> to be signed, and the proof is not part of the transaction hash. Anyone
> with access to TXO MMR data can (re)generate missing proofs, so minimal,
> if any, changes are required to wallet software to make use of TXO
> commitments.
>
>
> ## Delayed Commitments
>
> TXO commitments aren't a new idea - the author proposed them years ago in
> response to UTXO commitments.
> However, it's critical for small miners' orphan rates that block
> validation be fast, and so far it has proven difficult to create (U)TXO
> implementations with acceptable performance; updating and recalculating
> cryptographically hashed merkleized datasets is inherently more work than
> not doing so. Fortunately, if we maintain a UTXO set for recent outputs,
> TXO commitments are only needed when spending old, archived outputs. We
> can take advantage of this by delaying the commitment, allowing it to be
> calculated well in advance of it actually being used, thus changing a
> latency-critical task into a much easier average-throughput problem.
>
> Concretely, each block B_i commits to the TXO set state as of block
> B_{i-n} - in other words, what the TXO commitment would have been n blocks
> ago, if not for the n-block delay. Since that commitment only depends on
> the contents of the blockchain up until block B_{i-n}, the contents of any
> block after it are irrelevant to the calculation.
>
>
> ## Implementation
>
> Our proposed high-performance/low-latency delayed-commitment full-node
> implementation needs to store the following data:
>
> 1) UTXO set
>
>     Low-latency K:V map of txouts definitely known to be unspent. Similar
>     to the existing UTXO implementation, but with the key difference that
>     old, unspent outputs may be pruned from the UTXO set.
>
> 2) STXO set
>
>     Low-latency set of transaction outputs known to have been spent by
>     transactions after the most recent TXO commitment, but created prior
>     to the TXO commitment.
>
> 3) TXO journal
>
>     FIFO of outputs that need to be marked as spent in the TXO MMR.
>     Appends must be low-latency; removals can be high-latency.
>
> 4) TXO MMR list
>
>     Prunable, ordered list of TXO MMRs, mainly the highest pending
>     commitment, backed by a reference-counted, cryptographically hashed
>     object store indexed by digest (similar to how git repos work).
>     High-latency is OK.
>     We'll cover this in more detail later.
>
>
> ### Fast-Path: Verifying a Txout Spend In a Block
>
> When a transaction output is spent by a transaction in a block, we have
> two cases:
>
> 1) Recently created output
>
>     Output created after the most recent TXO commitment, so it should be
>     in the UTXO set; the transaction spending it does not need a TXO
>     commitment proof. Remove the output from the UTXO set and append it
>     to the TXO journal.
>
> 2) Archived output
>
>     Output created prior to the most recent TXO commitment, so there's no
>     guarantee it's in the UTXO set; the transaction will have a TXO
>     commitment proof for the most recent TXO commitment showing that it
>     was unspent. Check that the output isn't already in the STXO set
>     (double-spent), and if not, add it. Append the output and TXO
>     commitment proof to the TXO journal.
>
> In both cases recording an output as spent requires no more than two
> key:value updates and one journal append. The existing UTXO set requires
> one key:value update per spend, so we can expect new-block validation
> latency to be within 2x of the status quo even in the worst case of 100%
> archived output spends.
>
>
> ### Slow-Path: Calculating Pending TXO Commitments
>
> In a low-priority background task we flush the TXO journal, recording the
> outputs spent by each block in the TXO MMR, and hashing MMR data to obtain
> the TXO commitment digest. Additionally, this background task removes
> STXOs that have been recorded in TXO commitments, and prunes TXO
> commitment data no longer needed.
>
> Throughput for the TXO commitment calculation will be worse than the
> existing UTXO-only scheme. This impacts bulk verification, e.g. initial
> block download. That said, TXO commitments provide other possible
> tradeoffs that can mitigate the impact of slower validation throughput,
> such as skipping validation of old history, as well as fraud-proof
> approaches.
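The fast-path case analysis above can be sketched in a few lines. This is an editorial illustration, not the proposal's code; the container names (`utxo`, `stxo`, `journal`) and the function are hypothetical stand-ins for the stores described in the text:

```python
# Hypothetical stores following the proposal's terminology.
utxo = {("txid1", 0): "recent output"}   # outputs created after the latest TXO commitment
stxo = set()                             # archived outputs spent since that commitment
journal = []                             # FIFO of spends to apply to the TXO MMR later

def apply_spend(outpoint, txo_proof=None):
    """Validate a spend per the two cases above.

    Returns True if the spend is accepted; False for a double-spend or an
    archived spend that arrives without a TXO commitment proof."""
    if outpoint in utxo:
        # Case 1: recently created output -- no proof needed.
        del utxo[outpoint]
        journal.append((outpoint, None))
        return True
    # Case 2: archived output -- a proof against the latest commitment is required,
    # and the output must not already be in the STXO set.
    if txo_proof is None or outpoint in stxo:
        return False
    stxo.add(outpoint)
    journal.append((outpoint, txo_proof))
    return True

assert apply_spend(("txid1", 0))                          # recent: accepted
assert not apply_spend(("txid1", 0))                      # replay without proof: rejected
assert apply_spend(("old_txid", 3), txo_proof="proof")    # archived with proof: accepted
assert not apply_spend(("old_txid", 3), txo_proof="proof")  # double-spend: rejected
```

Each accepted spend touches at most two key:value structures plus one journal append, matching the within-2x latency claim.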
> ### TXO MMR Implementation Details
>
> Each TXO MMR state is a modification of the previous one with most
> information shared, so we can space-efficiently store a large number of
> TXO commitment states, where each state is a small delta of the previous
> state, by sharing unchanged data between states; cycles are impossible in
> merkleized data structures, so simple reference counting is sufficient
> for garbage collection. Data no longer needed can be pruned by dropping
> it from the database, and unpruned by adding it again. Since everything
> is committed to via cryptographic hash, we're guaranteed that regardless
> of where we get the data, after unpruning we'll have the right data.
>
> Let's look at how the TXO MMR works in detail. Consider the following TXO
> MMR with two txouts, which we'll call state #0:
>
>       0
>      / \
>     a   b
>
> If we add another entry we get state #1:
>
>         1
>        / \
>       0   \
>      / \   \
>     a   b   c
>
> Note how 100% of the state #0 data was reused in commitment #1. Let's add
> two more entries to get state #2:
>
>             2
>            / \
>           2   \
>          / \   \
>         /   \   \
>        /     \   \
>       0       2   \
>      / \     / \   \
>     a   b   c   d   e
>
> This time part of state #1 wasn't reused - it wasn't a perfect binary
> tree - but we've still got a lot of reuse.
>
> Now suppose state #2 is committed into the blockchain by the most recent
> block. Future transactions attempting to spend outputs created as of
> state #2 are obliged to prove that they are unspent; essentially they're
> forced to provide part of the state #2 MMR data.
> This lets us prune that data, discarding it, leaving us with only the
> bare minimum data we need to append new txouts to the TXO MMR - the tips
> of the perfect binary trees ("mountains") within the MMR:
>
>             2
>            / \
>           2   \
>                \
>                 \
>                  \
>                   \
>                    \
>                     e
>
> Note that we're glossing over some nuance here about exactly what data
> needs to be kept; depending on the details of the implementation, the
> only data we need for nodes "2" and "e" may be their hash digests.
>
> Adding another three txouts results in state #3:
>
>                 3
>                / \
>               /   \
>              /     \
>             /       \
>            /         \
>           /           \
>          /             \
>         2               3
>                        / \
>                       /   \
>                      /     \
>                     3       3
>                    / \     / \
>                   e   f   g   h
>
> Suppose recently created txout f is spent. We have all the data required
> to update the MMR, giving us state #4. It modifies two inner nodes and
> one leaf node:
>
>                 4
>                / \
>               /   \
>              /     \
>             /       \
>            /         \
>           /           \
>          /             \
>         2               4
>                        / \
>                       /   \
>                      /     \
>                     4       3
>                    / \     / \
>                   e  (f)  g   h
>
> Spending an archived txout requires the transaction to provide the merkle
> path to the most recently committed TXO, in our case state #2. If txout b
> is spent, that means the transaction must provide the following data from
> state #2:
>
>       2
>      /
>     2
>    /
>   /
>  /
> 0
>  \
>   b
>
> We can add that data to our local knowledge of the TXO MMR, unpruning
> part of it:
>
>                 4
>                / \
>               /   \
>              /     \
>             /       \
>            /         \
>           /           \
>          /             \
>         2               4
>        /               / \
>       /               /   \
>      /               /     \
>     0               4       3
>      \             / \     / \
>       b           e  (f)  g   h
>
> Remember, we haven't _modified_ state #4 yet; we just have more data
> about it. When we mark txout b as spent we get state #5:
>
>                 5
>                / \
>               /   \
>              /     \
>             /       \
>            /         \
>           /           \
>          /             \
>         5               4
>        /               / \
>       /               /   \
>      /               /     \
>     5               4       3
>      \             / \     / \
>      (b)          e  (f)  g   h
>
> Secondly, by now state #3 has been committed into the chain, and
> transactions that want to spend txouts created as of state #3 must
> provide a TXO proof consisting of state #3 data.
> The leaf nodes for outputs g and h, and the inner node above them, are
> part of state #3, so we prune them:
>
>                 5
>                / \
>               /   \
>              /     \
>             /       \
>            /         \
>           /           \
>          /             \
>         5               4
>        /               /
>       /               /
>      /               /
>     5               4
>      \             / \
>      (b)          e  (f)
>
> Finally, let's put this all together by spending txouts a, c, and g, and
> creating three new txouts i, j, and k. State #3 was the most recently
> committed state, so the transactions spending a and g provide merkle
> paths up to it. This includes part of the state #2 data:
>
>                 3
>                / \
>               /   \
>              /     \
>             /       \
>            /         \
>           /           \
>          /             \
>         2               3
>        / \               \
>       /   \               \
>      /     \               \
>     0       2               3
>    /       /               /
>   a       c               g
>
> After unpruning we have the following data for state #5:
>
>                 5
>                / \
>               /   \
>              /     \
>             /       \
>            /         \
>           /           \
>          /             \
>         5               4
>        / \             / \
>       /   \           /   \
>      /     \         /     \
>     5       2       4       3
>    / \     /       / \     /
>   a  (b)  c       e  (f)  g
>
> That's sufficient to mark the three outputs as spent and add the three
> new txouts, resulting in state #6:
>
>                      6
>                     / \
>                    /   \
>                   /     \
>                  /       \
>                 /         \
>                6           \
>               / \           \
>              /   \           \
>             /     \           \
>            /       \           \
>           /         \           \
>          /           \           \
>         /             \           \
>        6               6           \
>       / \             / \           \
>      /   \           /   \           6
>     /     \         /     \         / \
>    6       6       4       6       6   \
>   / \     /       / \     /       / \   \
> (a) (b) (c)     e  (f)  (g)     i   j   k
>
> Again, state #4 related data can be pruned. In addition, depending on
> how the STXO set is implemented, we may also be able to prune data
> related to spent txouts after that state, including inner nodes where all
> txouts under them have been spent (more on pruning spent inner nodes
> later).
>
>
> ### Consensus and Pruning
>
> It's important to note that pruning behavior is consensus-critical: a
> full node that is missing data due to pruning it too soon will fall out
> of consensus, and a miner that fails to include a merkle proof that is
> required by the consensus is creating an invalid block.
> At the same time, many full nodes will have significantly more data on
> hand than the bare minimum so they can help wallets make transactions
> spending old coins; implementations should strongly consider separating
> the data that is, and isn't, strictly required for consensus.
>
> A reasonable approach for the low-level cryptography may be to actually
> treat the two cases differently, with the TXO commitments committing to
> what data does and does not need to be kept on hand by the UTXO
> expiration rules. On the other hand, leaving that uncommitted allows for
> certain types of soft-forks where the protocol is changed to require more
> data than it previously did.
>
>
> ### Consensus-Critical Storage Overheads
>
> Only the UTXO and STXO sets need to be kept on fast random-access
> storage. Since STXO set entries can only be created by spending a UTXO -
> and are smaller than a UTXO entry - we can guarantee that the peak size
> of the UTXO and STXO sets combined will always be less than the peak size
> of the UTXO set alone in the existing UTXO-only scheme (though the
> combined size can be temporarily higher than what the UTXO set size alone
> would be when large numbers of archived txouts are spent).
>
> TXO journal entries and unpruned entries in the TXO MMR have log2(n)
> maximum overhead per entry: a unique merkle path to a TXO commitment (by
> "unique" we mean that no other entry shares data with it). On a
> reasonably fast system the TXO journal will be flushed quickly,
> converting it into TXO MMR data; the TXO journal will never be more than
> a few blocks in size.
>
> Transactions spending non-archived txouts are not required to provide any
> TXO commitment data; we must have that data on hand in the form of one
> TXO MMR entry per UTXO.
> Once spent, however, the TXO MMR leaf node associated with that
> non-archived txout can be immediately pruned - it's no longer in the UTXO
> set, so any attempt to spend it will fail; the data is now immutable and
> we'll never need it again. Inner nodes in the TXO MMR can also be pruned
> if all leaves under them are fully spent; detecting this is easy: the TXO
> MMR is a merkle-sum tree, with each inner node committing to the sum of
> the unspent txouts under it.
>
> When an archived txout is spent, the transaction is required to provide a
> merkle path to the most recent TXO commitment. As shown above, that path
> is sufficient information to unprune the necessary nodes in the TXO MMR
> and apply the spend immediately, reducing this case to the TXO journal
> size question (non-consensus-critical overhead is a different question,
> which we'll address in the next section).
>
> Taking all this into account, the only significant storage overhead of
> our TXO commitments scheme when compared to the status quo is the log2(n)
> merkle path overhead; as long as less than 1/log2(n) of the UTXO set is
> active, non-archived UTXOs, we've come out ahead, even in the unrealistic
> case where all storage available is equally fast. In the real world that
> isn't yet the case - even SSDs are significantly slower than RAM.
>
>
> ### Non-Consensus-Critical Storage Overheads
>
> Transactions spending archived txouts pose two challenges:
>
> 1) Obtaining up-to-date TXO commitment proofs
>
> 2) Updating those proofs as blocks are mined
>
> The first challenge can be handled by specialized archival nodes, not
> unlike how some nodes make transaction data available to wallets via
> bloom filters or the Electrum protocol. There's a whole variety of
> options available, and the data can be easily sharded to scale
> horizontally; the data is self-validating, allowing horizontal scaling
> without trust.
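The merkle-sum pruning rule mentioned above - an inner node is prunable once the unspent sum beneath it reaches zero - can be sketched as follows. This is an illustrative encoding, not the proposal's wire format; the `leaf`/`node` functions and the domain-separation tags are hypothetical:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def leaf(value: int, spent: bool):
    # Each leaf contributes its value to the unspent sum, or 0 once spent.
    s = 0 if spent else value
    return (h(b"leaf" + value.to_bytes(8, "big") + bytes([spent])), s)

def node(left, right):
    # An inner node commits to its children's digests and their combined
    # unspent sum, making that sum part of the merkle commitment.
    (lh, ls), (rh, rs) = left, right
    return (h(b"node" + lh + ls.to_bytes(8, "big") + rh + rs.to_bytes(8, "big")), ls + rs)

def prunable(n) -> bool:
    # No unspent txouts remain beneath this node.
    return n[1] == 0

a = leaf(50, spent=True)
b = leaf(30, spent=True)
c = leaf(20, spent=False)
d = leaf(10, spent=False)
ab, cd = node(a, b), node(c, d)
assert prunable(ab)           # both children spent: sum is 0
assert not prunable(cd)       # 30 still unspent below
assert node(ab, cd)[1] == 30  # the root tracks the total unspent sum
```

Because the sum is hashed into each node, peers can verify a claimed "fully spent" subtree without seeing its leaves.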
> While miners and relay nodes don't need to be concerned about the initial
> commitment proof, updating that proof is another matter. If a node
> aggressively prunes old versions of the TXO MMR as it calculates pending
> TXO commitments, it won't have the data available to update the TXO
> commitment proof to be against the next block when that block is found;
> the child nodes of the TXO MMR tip are guaranteed to have changed, yet
> aggressive pruning would have discarded that data.
>
> Relay nodes could ignore this problem if they simply accept the fact that
> they'll only be able to fully relay the transaction once, when it is
> initially broadcast, and won't be able to provide mempool functionality
> after the initial relay. Modulo high-latency mixnets, this is probably
> acceptable; the author has previously argued that relay nodes don't need
> a mempool² at all.
>
> For a miner, though, not having the data necessary to update the proofs
> as blocks are found means potentially losing out on transaction fees. So
> how much extra data is necessary to make this a non-issue?
>
> Since the TXO MMR is insertion-ordered, spending a non-archived txout can
> only invalidate the upper nodes of the archived txout's TXO MMR proof (if
> this isn't clear, imagine a two-level scheme, with per-block TXO MMRs
> committed by a master MMR for all blocks). The maximum number of relevant
> inner nodes changed is log2(n) per block, so if there are n non-archival
> blocks between the most recent TXO commitment and the pending TXO MMR
> tip, we have to store log2(n)*n inner nodes - on the order of a few dozen
> MB even when n is a (seemingly ridiculously high) year's worth of blocks.
>
> Archived txout spends, on the other hand, can invalidate TXO MMR proofs
> at any level - consider the case of two adjacent txouts being spent. To
> guarantee success requires storing full proofs.
> However, they're limited by the blocksize limit, and additionally are
> expected to be relatively uncommon. For example, if 1% of 1MB blocks were
> archival spends, our hypothetical year-long TXO commitment delay is only
> a few hundred MB of data with low-IO-performance requirements.
>
>
> ## Security Model
>
> Of course, a TXO commitment delay of a year sounds ridiculous. Even the
> slowest imaginable computer isn't going to need more than a few blocks of
> TXO commitment delay to keep up ~100% of the time, and there's no reason
> why we can't have the UTXO archive delay be significantly longer than the
> TXO commitment delay.
>
> However, as with UTXO commitments, TXO commitments raise issues with
> Bitcoin's security model by allowing miners to profitably mine
> transactions without bothering to validate prior history. At the extreme,
> if there were no commitment delay at all, then at the cost of some extra
> network bandwidth "full" nodes could operate and even mine blocks
> completely statelessly by expecting all transactions to include "proof"
> that their inputs are unspent; a TXO commitment proof for a commitment
> you haven't verified isn't a proof that a transaction output is unspent,
> it's a proof that some miners claimed the txout was unspent.
>
> At one extreme, we could simply implement TXO commitments in a "virtual"
> fashion, without miners actually including the TXO commitment digest in
> their blocks at all. Full nodes would be forced to compute the commitment
> from scratch, in the same way they are forced to compute the UTXO state,
> or total work. Of course, a full-node operator who doesn't want to verify
> old history can get a copy of the TXO state from a trusted source - no
> different from how you could get a copy of the UTXO set from a trusted
> source.
>
> A more pragmatic approach is to accept that people will do that anyway,
> and instead assume that sufficiently old blocks are valid.
> But how old is "sufficiently old"? First of all, if your full node
> implementation comes "from the factory" with a reasonably up-to-date
> minimum accepted total-work thresholdⁱ - in other words, it won't accept
> a chain with less than that amount of total work - it may be reasonable
> to assume any Sybil attacker with sufficient hashing power to make a
> forked chain meeting that threshold with, say, six months' worth of
> blocks has enough hashing power to threaten the main chain as well.
>
> That leaves public attempts to falsify TXO commitments, done out in the
> open by the majority of hashing power. In this circumstance the "assumed
> valid" threshold determines how long the attack would have to go on
> before full nodes start accepting the invalid chain - or at least, newly
> installed/recently reset full nodes. The minimum age that we can "assume
> valid" is a tradeoff between political/social/technical concerns; we
> probably want at least a few weeks to guarantee the defenders a chance to
> organise themselves.
>
> With this in mind, a longer-than-technically-necessary TXO commitment
> delayʲ may help ensure that full node software actually validates some
> minimum number of blocks out-of-the-box, without taking shortcuts.
> However this can be achieved in a wide variety of ways, such as the
> author's prev-block-proof proposal³, fraud proofs, or even a PoW with an
> inner loop dependent on blockchain data. Like UTXO commitments, TXO
> commitments are also potentially very useful in reducing the need for SPV
> wallet software to trust third parties providing them with transaction
> data.
>
> i) Checkpoints that reject any chain without a specific block are a more
> common, if uglier, way of achieving this protection.
>
> j) A good homework problem is to figure out how the TXO commitment could
> be designed such that the delay could be reduced in a soft-fork.
> ## Further Work
>
> While we've shown that TXO commitments certainly could be implemented
> without increasing peak IO bandwidth/block validation latency
> significantly with the delayed commitment approach, we're far from being
> certain that they should be implemented this way (or at all).
>
> 1) Can a TXO commitment scheme be optimized sufficiently to be used
> directly without a commitment delay? Obviously it'd be preferable to
> avoid all the above complexity entirely.
>
> 2) Is it possible to use a metric other than age, e.g. priority? While
> this complicates the pruning logic, it could use the UTXO set space more
> efficiently, especially if your goal is to prioritise bitcoin
> value-transfer over other uses (though if "normal" wallets nearly never
> need to use TXO commitment proofs to spend outputs, the infrastructure to
> actually do this may rot).
>
> 3) Should UTXO archiving be based on a fixed-size UTXO set, rather than
> an age/priority/etc. threshold?
>
> 4) By fixing the problem (or possibly just "fixing" the problem) are we
> encouraging/legitimising blockchain use-cases other than BTC value
> transfer? Should we?
>
> 5) Instead of TXO commitment proofs counting towards the blocksize limit,
> can we use a different miner fairness/decentralization metric/incentive?
> For instance it might be reasonable for the TXO commitment proof size to
> be discounted, or ignored entirely, if a proof-of-propagation scheme
> (e.g. thinblocks) is used to ensure all miners have received the proof in
> advance.
>
> 6) How does this interact with fraud proofs? Obviously, furthering
> dependency on non-cryptographically-committed STXO/UTXO databases is
> incompatible with the modularized validation approach to implementing
> fraud proofs.
> # References
>
> 1) "Merkle Mountain Ranges",
>    Peter Todd, OpenTimestamps, Mar 18 2013,
>    https://github.com/opentimestamps/opentimestamps-server/blob/master/doc/merkle-mountain-range.md
>
> 2) "Do we really need a mempool? (for relay nodes)",
>    Peter Todd, bitcoin-dev mailing list, Jul 18th 2015,
>    https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009479.html
>
> 3) "Segregated witnesses and validationless mining",
>    Peter Todd, bitcoin-dev mailing list, Dec 23rd 2015,
>    https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012103.html
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Great post, Peter.

4) By fixing the problem (or possibly just=
 "fixing" the problem) are we encouraging/legitimising blockchain=
 use-cases other than BTC value transfer? Should we?

I don't think it would encourage non-value-transfer usage more = because, as you noted, many such use cases are valuable enough that people = are willing to pay much higher transaction fees in order to have their data= timestamped. I think it's more an issue of the block space / transacti= on fee market since the cost of making a transaction is directly borne by u= sers, as opposed to the cost of the UTXO set which may not be borne by them= if they don't run a full node.

I'm of the opinion= that if the world decides that Bitcoin is more valuable as a trustworthy g= eneralized timestamping mechanism than as a value transfer system, protocol= developers shouldn't try to steer the ship against the wind. As more p= eople and use cases enter the ecosystem, the most valuable ones ought to su= rvive - I hope that this market will be fostered by the developers.

- Jameson


On Tue, May 17, 2016 at 9:23 AM, Peter Todd via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
# Motivation

UTXO growth is a serious concern for Bitcoin's long-term decentralization. To
run a competitive mining operation, potentially the entire UTXO set must be in
RAM to achieve competitive latency; your larger, more centralized competitors
will have the UTXO set in RAM. Mining is a zero-sum game, so if they keep the
set in RAM and you don't, the extra latency directly impacts your profit
margin. Secondly, possession of the UTXO set is one of the minimum
requirements to run a full node; the larger the set, the harder it is to run a
full node.

Currently the maximum size of the UTXO set is unbounded, as there is no
consensus rule that limits growth other than the block-size limit itself; as
of writing the UTXO set is 1.3GB in the on-disk, compressed serialization,
which expands to significantly more in memory. UTXO growth is driven by a
number of factors, including the fact that there is little incentive to merge
inputs, lost coins, dust outputs that can't be economically spent, and
non-btc-value-transfer "blockchain" use-cases such as anti-replay oracles and
timestamping.

We don't have good tools to combat UTXO growth. Segregated Witness proposes to
give witness space a 75% discount, in part to make reducing the UTXO set size
by spending txouts cheaper. While this may change wallets to more often spend
dust, it's hard to imagine an incentive sufficiently strong to discourage
most, let alone all, UTXO-growing behavior.

For example, timestamping applications often create unspendable outputs due to
ease of implementation, and because doing so is an easy way to make sure that
the data required to reconstruct the timestamp proof won't get lost - all
Bitcoin full nodes are forced to keep a copy of it. Similarly, anti-replay
use-cases like using the UTXO set for key rotation piggyback on the uniquely
strong security and decentralization guarantee that Bitcoin provides; it's
very difficult - perhaps impossible - to provide these applications with
alternatives that are equally secure. These non-btc-value-transfer use-cases
can often afford to pay far higher fees per UTXO created than competing
btc-value-transfer use-cases; many users could afford to spend $50 to register
a new PGP key, yet would rather not spend $50 in fees to create a standard
two-output transaction. Effective techniques to resist miner censorship
exist, so short of resorting to whitelists, blocking non-btc-value-transfer
use-cases as "spam" is not a long-term, incentive-compatible solution.

A hard upper limit on UTXO set size could create a more level playing field in
the form of fixed minimum requirements to run a performant Bitcoin node, and
make the issue of UTXO "spam" less important. However, making any coins
unspendable, regardless of age or value, is a politically untenable economic
change.


# TXO Commitments

With a merkle tree committing to the state of all transaction outputs, both
spent and unspent, we can provide a method of compactly proving the current
state of an output. This lets us "archive" less frequently accessed parts of
the UTXO set, allowing full nodes to discard the associated data while still
providing a mechanism to spend those archived outputs by proving to those
nodes that the outputs are in fact unspent.

Specifically, TXO commitments proposes a Merkle Mountain Range¹ (MMR), a
type of deterministic, indexable, insertion-ordered merkle tree, which allows
new items to be cheaply appended to the tree with minimal storage
requirements, just log2(n) "mountain tips". Once an output is added to the TXO
MMR it is never removed; if an output is spent its status is updated in place.
Both the state of a specific item in the MMR, as well as the validity of
changes to items in the MMR, can be proven with log2(n)-sized proofs
consisting of a merkle path to the tip of the tree.
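As an aside, the append rule is simple enough to sketch in a few lines. The
following is an illustrative toy, not the OpenTimestamps implementation; the
class, the hashing scheme, and the way the peaks are "bagged" into a single
digest are all assumptions of this sketch:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    # Toy hash combiner; a real implementation would tag leaves vs. nodes.
    return hashlib.sha256(b"".join(parts)).digest()

class MMR:
    def __init__(self):
        self.peaks = []  # list of (height, digest), highest mountain first

    def append(self, leaf: bytes):
        # A new item starts as a height-0 mountain; equal-height mountains
        # are repeatedly merged, so at most log2(n) peaks ever remain.
        node = (0, h(leaf))
        while self.peaks and self.peaks[-1][0] == node[0]:
            height, digest = self.peaks.pop()
            node = (height + 1, h(digest, node[1]))
        self.peaks.append(node)

    def root(self) -> bytes:
        # "Bag" the mountain tips into a single commitment digest.
        acc = self.peaks[-1][1]
        for _, digest in reversed(self.peaks[:-1]):
            acc = h(digest, acc)
        return acc

mmr = MMR()
for txout in [b"a", b"b", b"c", b"d", b"e"]:
    mmr.append(txout)
assert len(mmr.peaks) == 2  # 5 = 0b101: mountains of 4 and 1 leaves
```

Note how the number of peaks tracks the set bits of the item count, which is
where the log2(n) storage bound comes from.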

At an extreme, with TXO commitments we could even have no UTXO set at all,
entirely eliminating the UTXO growth problem. Transactions would simply be
accompanied by TXO commitment proofs showing that the outputs they wanted to
spend were still unspent; nodes could update the state of the TXO MMR purely
from TXO commitment proofs. However, the log2(n) bandwidth overhead per txin
is substantial, so a more realistic implementation is to have a UTXO cache for
recent transactions, with TXO commitments acting as an alternative for the
(rare) event that an old txout needs to be spent.

Proofs can be generated and added to transactions without the involvement of
the signers, even after the fact; there's no need for the proof itself to be
signed, and the proof is not part of the transaction hash. Anyone with access
to TXO MMR data can (re)generate missing proofs, so minimal, if any, changes
are required to wallet software to make use of TXO commitments.


## Delayed Commitments

TXO commitments aren't a new idea - the author proposed them years ago in
response to UTXO commitments. However, it's critical for small miners' orphan
rates that block validation be fast, and so far it has proven difficult to
create (U)TXO implementations with acceptable performance; updating and
recalculating cryptographically hashed, merkelized datasets is inherently more
work than not doing so. Fortunately, if we maintain a UTXO set for recent
outputs, TXO commitments are only needed when spending old, archived outputs.
We can take advantage of this by delaying the commitment, allowing it to be
calculated well in advance of it actually being used, thus changing a
latency-critical task into a much easier average-throughput problem.

Concretely, each block B_i commits to the TXO set state as of block B_{i-n} -
in other words, what the TXO commitment would have been n blocks ago, if not
for the n-block delay. Since that commitment only depends on the contents of
the blockchain up until block B_{i-n}, the contents of any block after are
irrelevant to the calculation.
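A minimal sketch of that rule, where `N_DELAY` and the stand-in digest
function are illustrative assumptions of the sketch:

```python
import hashlib

N_DELAY = 100  # hypothetical delay parameter n

def txo_state_digest(blocks):
    # Stand-in for the real TXO MMR digest: a hash over serialized history.
    acc = hashlib.sha256()
    for b in blocks:
        acc.update(b)
    return acc.digest()

def expected_txo_commitment(chain, i):
    """Digest block B_i must commit to: the TXO state as of B_{i-N_DELAY}.
    Only blocks up to and including B_{i-N_DELAY} matter."""
    if i < N_DELAY:
        return None  # too early in the chain to require a commitment
    return txo_state_digest(chain[: i - N_DELAY + 1])

chain = [b"block%d" % k for k in range(200)]
# Blocks after B_{i-N_DELAY} don't affect the commitment at height i:
assert expected_txo_commitment(chain, 150) == \
       expected_txo_commitment(chain[:51] + [b"x"] * 149, 150)
```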


## Implementation

Our proposed high-performance/low-latency delayed commitment full-node
implementation needs to store the following data:

1) UTXO set

    Low-latency K:V map of txouts definitely known to be unspent. Similar to
    the existing UTXO implementation, but with the key difference that old,
    unspent outputs may be pruned from the UTXO set.


2) STXO set

    Low-latency set of transaction outputs known to have been spent by
    transactions after the most recent TXO commitment, but created prior to
    the TXO commitment.


3) TXO journal

    FIFO of outputs that need to be marked as spent in the TXO MMR. Appends
    must be low-latency; removals can be high-latency.


4) TXO MMR list

    Prunable, ordered list of TXO MMRs, mainly the highest pending
    commitment, backed by a reference-counted, cryptographically hashed
    object store indexed by digest (similar to how git repos work).
    High-latency ok. We'll cover this in more detail later.
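The four stores above might be laid out roughly like this; the container
types are assumptions of this sketch, not a real implementation:

```python
from collections import deque

class NodeState:
    def __init__(self):
        # 1) UTXO set: low-latency outpoint -> txout map; old unspent
        #    outputs may have been pruned from it.
        self.utxo = {}
        # 2) STXO set: outputs spent since the most recent TXO commitment
        #    but created before it.
        self.stxo = set()
        # 3) TXO journal: FIFO of spends awaiting application to the MMR.
        #    Appends are cheap; draining happens in the background.
        self.journal = deque()
        # 4) TXO MMR list: digest-indexed, reference-counted object store
        #    (git-style) backing the pruned MMR states.
        self.objects = {}  # digest -> (refcount, node_data)
```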


### Fast-Path: Verifying a Txout Spend In a Block

When a transaction output is spent by a transaction in a block we have two
cases:

1) Recently created output

    Output created after the most recent TXO commitment, so it should be in
    the UTXO set; the transaction spending it does not need a TXO commitment
    proof. Remove the output from the UTXO set and append it to the TXO
    journal.

2) Archived output

    Output created prior to the most recent TXO commitment, so there's no
    guarantee it's in the UTXO set; the transaction will have a TXO
    commitment proof for the most recent TXO commitment showing that it was
    unspent. Check that the output isn't already in the STXO set
    (double-spent), and if not add it. Append the output and TXO commitment
    proof to the TXO journal.

In both cases recording an output as spent requires no more than two
key:value updates, and one journal append. The existing UTXO set requires one
key:value update per spend, so we can expect new-block validation latency to
be within 2x of the status quo even in the worst case of 100% archived output
spends.
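The two cases can be sketched as follows; the helper and field names are
assumptions of this sketch, and verification of the TXO commitment proof
itself is elided:

```python
from collections import deque
from types import SimpleNamespace

def spend(state, outpoint, txo_proof=None):
    """Mark `outpoint` spent per the fast-path rules above."""
    if outpoint in state.utxo:
        # Case 1: recently created output, still in the UTXO set.
        del state.utxo[outpoint]                # one K:V update
        state.journal.append((outpoint, None))  # one journal append
        return True
    # Case 2: archived output; a proof against the most recent TXO
    # commitment is mandatory (checking it is elided here).
    if txo_proof is None:
        return False
    if outpoint in state.stxo:
        return False  # double-spend of an archived output
    state.stxo.add(outpoint)                    # second K:V update
    state.journal.append((outpoint, txo_proof))
    return True

state = SimpleNamespace(utxo={"tx1:0": "txout"}, stxo=set(), journal=deque())
assert spend(state, "tx1:0")                         # recent output
assert spend(state, "old:0", txo_proof="proof")      # archived, with proof
assert not spend(state, "old:0", txo_proof="proof")  # double-spend rejected
```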

### Slow-Path: Calculating Pending TXO Commitments

In a low-priority background task we flush the TXO journal, recording the
outputs spent by each block in the TXO MMR, and hashing MMR data to obtain
the TXO commitment digest. Additionally this background task removes STXOs
that have been recorded in TXO commitments, and prunes TXO commitment data no
longer needed.

Throughput for the TXO commitment calculation will be worse than the existing
UTXO-only scheme. This impacts bulk verification, e.g. initial block
download. That said, TXO commitments provide other possible tradeoffs that
can mitigate the impact of slower validation throughput, such as skipping
validation of old history, as well as fraud-proof approaches.


### TXO MMR Implementation Details

Each TXO MMR state is a modification of the previous one with most
information shared, so we can space-efficiently store a large number of TXO
commitment states, where each state is a small delta of the previous state,
by sharing unchanged data between each state; cycles are impossible in
merkelized data structures, so simple reference counting is sufficient for
garbage collection. Data no longer needed can be pruned by dropping it from
the database, and unpruned by adding it again. Since everything is committed
to via cryptographic hash, we're guaranteed that regardless of where we get
the data, after unpruning we'll have the right data.
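A sketch of such a git-like, digest-indexed, reference-counted object store;
the API names are assumptions of this sketch:

```python
import hashlib

class ObjectStore:
    def __init__(self):
        self.objs = {}  # digest -> [refcount, data]

    def put(self, data: bytes) -> bytes:
        digest = hashlib.sha256(data).digest()
        entry = self.objs.setdefault(digest, [0, data])
        entry[0] += 1
        return digest

    def prune(self, digest: bytes):
        # Dropping data is always safe: it can be re-supplied later, and
        # the digest guarantees whatever comes back is the right data.
        entry = self.objs[digest]
        entry[0] -= 1
        if entry[0] == 0:
            del self.objs[digest]

    def unprune(self, digest: bytes, data: bytes):
        # Self-validating: accept data from any source, trusted or not.
        assert hashlib.sha256(data).digest() == digest
        self.objs[digest] = [1, data]

store = ObjectStore()
d = store.put(b"mmr-node")
store.prune(d)            # no longer needed; drop it
store.unprune(d, b"mmr-node")  # any source works; the hash checks it
```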

Let's look at how the TXO MMR works in detail. Consider the following TXO MMR
with two txouts, which we'll call state #0:

      0
     / \
    a   b

If we add another entry we get state #1:

        1
       / \
      0   \
     / \   \
    a   b   c

Note how 100% of the state #0 data was reused in commitment #1. Let's
add two more entries to get state #2:

            2
           / \
          2   \
         / \   \
        /   \   \
       /     \   \
      0       2   \
     / \     / \   \
    a   b   c   d   e

This time part of state #1 wasn't reused - it wasn't a perfect binary
tree - but we've still got a lot of re-use.

Now suppose state #2 is committed into the blockchain by the most recent
block. Future transactions attempting to spend outputs created as of state #2
are obliged to prove that they are unspent; essentially they're forced to
provide part of the state #2 MMR data. This lets us prune that data,
discarding it, leaving us with only the bare minimum data we need to append
new txouts to the TXO MMR, the tips of the perfect binary trees ("mountains")
within the MMR:

            2
           / \
          2   \
               \
                \
                 \
                  \
                   e

Note that we're glossing over some nuance here about exactly what data needs
to be kept; depending on the details of the implementation the only data we
need for nodes "2" and "e" may be their hash digest.

Adding three more txouts results in state #3:

                  3
                 / \
                /   \
               /     \
              /       \
             /         \
            /           \
           /             \
          2               3
                         / \
                        /   \
                       /     \
                      3       3
                     / \     / \
                    e   f   g   h

Suppose recently created txout f is spent. We have all the data required to
update the MMR, giving us state #4. It modifies two inner nodes and one leaf
node:

                  4
                 / \
                /   \
               /     \
              /       \
             /         \
            /           \
           /             \
          2               4
                         / \
                        /   \
                       /     \
                      4       3
                     / \     / \
                    e  (f)  g   h

Spending an archived txout requires the transaction to provide the merkle
path to the most recently committed TXO state, in our case state #2. If
txout b is spent, that means the transaction must provide the following data
from state #2:

            2
           /
          2
         /
        /
       /
      0
       \
        b

We can add that data to our local knowledge of the TXO MMR, unpruning part of
it:

                  4
                 / \
                /   \
               /     \
              /       \
             /         \
            /           \
           /             \
          2               4
         /               / \
        /               /   \
       /               /     \
      0               4       3
       \             / \     / \
        b           e  (f)  g   h

Remember, we haven't _modified_ state #4 yet; we just have more data about
it. When we mark txout b as spent we get state #5:

                  5
                 / \
                /   \
               /     \
              /       \
             /         \
            /           \
           /             \
          5               4
         /               / \
        /               /   \
       /               /     \
      5               4       3
       \             / \     / \
       (b)          e  (f)  g   h

Secondly, by now state #3 has been committed into the chain, and transactions
that want to spend txouts created as of state #3 must provide a TXO proof
consisting of state #3 data. The leaf nodes for outputs g and h, and the
inner node above them, are part of state #3, so we prune them:

                  5
                 / \
                /   \
               /     \
              /       \
             /         \
            /           \
           /             \
          5               4
         /               /
        /               /
       /               /
      5               4
       \             / \
       (b)          e  (f)

Finally, let's put this all together by spending txouts a, c, and g, and
creating three new txouts i, j, and k. State #3 was the most recently
committed state, so the transactions spending a and g are providing merkle
paths up to it. This includes part of the state #2 data:

                  3
                 / \
                /   \
               /     \
              /       \
             /         \
            /           \
           /             \
          2               3
         / \               \
        /   \               \
       /     \               \
      0       2               3
     /       /               /
    a       c               g

After unpruning we have the following data for state #5:

                  5
                 / \
                /   \
               /     \
              /       \
             /         \
            /           \
           /             \
          5               4
         / \             / \
        /   \           /   \
       /     \         /     \
      5       2       4       3
     / \     /       / \     /
    a  (b)  c       e  (f)  g

That's sufficient to mark the three outputs as spent and add the three new
txouts, resulting in state #6:

                        6
                       / \
                      /   \
                     /     \
                    /       \
                   6         \
                  / \         \
                 /   \         \
                /     \         \
               /       \         \
              /         \         \
             6           6         \
            / \         / \         \
           /   \       /   \         6
          /     \     /     \       / \
         6       6   4       6     6   \
        / \     /   / \     /     / \   \
       (a) (b) (c)  e  (f) (g)   i   j   k

Again, state #4-related data can be pruned. In addition, depending on how the
STXO set is implemented, we may also be able to prune data related to spent
txouts after that state, including inner nodes where all txouts under them
have been spent (more on pruning spent inner nodes later).


### Consensus and Pruning

It's important to note that pruning behavior is consensus-critical: a full
node that is missing data due to pruning it too soon will fall out of
consensus, and a miner that fails to include a merkle proof that is required
by the consensus is creating an invalid block. At the same time many full
nodes will have significantly more data on hand than the bare minimum so they
can help wallets make transactions spending old coins; implementations should
strongly consider separating the data that is, and isn't, strictly required
for consensus.

A reasonable approach for the low-level cryptography may be to actually treat
the two cases differently, with the TXO commitments committing to what data
does and does not need to be kept on hand by the UTXO expiration rules. On
the other hand, leaving that uncommitted allows for certain types of
soft-forks where the protocol is changed to require more data than it
previously did.

### Consensus Critical Storage Overheads

Only the UTXO and STXO sets need to be kept on fast random access storage.
Since STXO set entries can only be created by spending a UTXO - and are
smaller than a UTXO entry - we can guarantee that the peak size of the UTXO
and STXO sets combined will always be less than the peak size of the UTXO set
alone in the existing UTXO-only scheme (though the combined size can be
temporarily higher than what the UTXO set size alone would be when large
numbers of archived txouts are spent).

TXO journal entries and unpruned entries in the TXO MMR have log2(n) maximum
overhead per entry: a unique merkle path to a TXO commitment (by "unique" we
mean that no other entry shares data with it). On a reasonably fast system
the TXO journal will be flushed quickly, converting it into TXO MMR data; the
TXO journal will never be more than a few blocks in size.

Transactions spending non-archived txouts are not required to provide any TXO
commitment data; we must have that data on hand in the form of one TXO MMR
entry per UTXO. Once spent, however, the TXO MMR leaf node associated with
that non-archived txout can be immediately pruned - it's no longer in the
UTXO set, so any attempt to spend it will fail; the data is now immutable and
we'll never need it again. Inner nodes in the TXO MMR can also be pruned if
all leaves under them are fully spent; detecting this is easy: the TXO MMR is
a merkle-sum tree, with each inner node committing to the sum of the unspent
txouts under it.
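A sketch of that prune test, assuming each inner node carries a count of the
unspent txouts beneath it (the real sum could just as well be value-weighted;
the names here are illustrative):

```python
class Node:
    """Merkle-sum tree node: inner nodes sum their children's unspent
    counts; leaves carry their own count (1 unspent, 0 spent)."""
    def __init__(self, left=None, right=None, unspent=0):
        self.left, self.right = left, right
        self.unspent = (unspent if left is None
                        else left.unspent + right.unspent)

def prunable(node):
    # An inner node is prunable once no unspent txouts remain below it.
    return node.unspent == 0

e, f = Node(unspent=1), Node(unspent=0)  # f already spent
inner = Node(e, f)
assert not prunable(inner)   # e is still unspent
e.unspent = 0                # now spend e as well
assert prunable(Node(e, f))  # whole subtree prunable
```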
When an archived txout is spent the transaction is required to provide a
merkle path to the most recent TXO commitment. As shown above, that path is
sufficient information to unprune the necessary nodes in the TXO MMR and
apply the spend immediately, reducing this case to the TXO journal size
question (non-consensus-critical overhead is a different question, which
we'll address in the next section).

Taking all this into account, the only significant storage overhead of our
TXO commitments scheme when compared to the status quo is the log2(n) merkle
path overhead; as long as less than 1/log2(n) of the UTXO set is active,
non-archived UTXOs, we've come out ahead, even in the unrealistic case where
all storage available is equally fast. In the real world that isn't yet the
case - even SSDs are significantly slower than RAM.


### Non-Consensus Critical Storage Overheads

Transactions spending archived txouts pose two challenges:

1) Obtaining up-to-date TXO commitment proofs

2) Updating those proofs as blocks are mined

The first challenge can be handled by specialized archival nodes, not unlike
how some nodes make transaction data available to wallets via bloom filters
or the Electrum protocol. There's a whole variety of options available, and
the data can be easily sharded to scale horizontally; the data is
self-validating, allowing horizontal scaling without trust.

While miners and relay nodes don't need to be concerned about the initial
commitment proof, updating that proof is another matter. If a node
aggressively prunes old versions of the TXO MMR as it calculates pending TXO
commitments, it won't have the data available to update the TXO commitment
proof to be against the next block when that block is found; the child nodes
of the TXO MMR tip are guaranteed to have changed, yet aggressive pruning
would have discarded that data.

Relay nodes could ignore this problem if they simply accept the fact that
they'll only be able to fully relay the transaction once, when it is
initially broadcast, and won't be able to provide mempool functionality after
the initial relay. Modulo high-latency mixnets, this is probably acceptable;
the author has previously argued that relay nodes don't need a mempool² at
all.

For a miner though, not having the data necessary to update the proofs as
blocks are found means potentially losing out on transaction fees. So how
much extra data is necessary to make this a non-issue?

Since the TXO MMR is insertion-ordered, spending a non-archived txout can
only invalidate the upper nodes of the archived txout's TXO MMR proof (if
this isn't clear, imagine a two-level scheme, with per-block TXO MMRs
committed by a master MMR for all blocks). The maximum number of relevant
inner nodes changed is log2(n) per block, so if there are n non-archival
blocks between the most recent TXO commitment and the pending TXO MMR tip, we
have to store log2(n)*n inner nodes - on the order of a few dozen MB even
when n is a (seemingly ridiculously high) year's worth of blocks.
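As a sanity check on that figure, assuming 32-byte digests and ten-minute
blocks (both assumptions of this back-of-the-envelope sketch):

```python
import math

blocks_per_year = 365 * 24 * 6             # ~10-minute blocks: 52,560
n = blocks_per_year
inner_nodes = math.ceil(math.log2(n)) * n  # log2(n) changed nodes per block
storage_mb = inner_nodes * 32 / 1e6        # 32-byte hash digests assumed
assert 20 < storage_mb < 50                # "a few dozen MB", as claimed
```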

Archived txout spends on the other hand can invalidate TXO MMR proofs at any
level - consider the case of two adjacent txouts being spent. To guarantee
success requires storing full proofs. However, they're limited by the blocksize
limit, and additionally are expected to be relatively uncommon. For example, if
1% of 1MB blocks were archival spends, our hypothetical year-long TXO commitment
delay is only a few hundred MB of data with low-IO-performance requirements.
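The archival-spend figure can be checked the same way (again assuming ~52,560 blocks per year; the constants are illustrative):

```python
BLOCKS_PER_YEAR = 6 * 24 * 365   # ~52,560 blocks at one per 10 minutes
BLOCK_SIZE = 1_000_000           # 1 MB blocks
ARCHIVAL_FRACTION = 0.01         # 1% of block space spent on archival spends

# full proofs for archival spends accumulated over a year-long delay
archival_mb = BLOCKS_PER_YEAR * BLOCK_SIZE * ARCHIVAL_FRACTION / 1e6
# about 526 MB - consistent with "a few hundred MB" above
```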


## Security Model

Of course, a TXO commitment delay of a year sounds ridiculous. Even the slowest
imaginable computer isn't going to need more than a few blocks of TXO
commitment delay to keep up ~100% of the time, and there's no reason why we
can't have the UTXO archive delay be significantly longer than the TXO
commitment delay.

However, as with UTXO commitments, TXO commitments raise issues with Bitcoin's
security model by allowing miners to profitably mine transactions without
bothering to validate prior history. At the extreme, if there were no commitment
delay at all, then at the cost of some extra network bandwidth "full" nodes
could operate and even mine blocks completely statelessly by expecting all
transactions to include "proof" that their inputs are unspent; a TXO commitment
proof for a commitment you haven't verified isn't a proof that a transaction
output is unspent, it's a proof that some miners claimed the txout was unspent.

At one extreme, we could simply implement TXO commitments in a "virtual"
fashion, without miners actually including the TXO commitment digest in their
blocks at all. Full nodes would be forced to compute the commitment from
scratch, in the same way they are forced to compute the UTXO state, or total
work. Of course a full node operator who doesn't want to verify old history can
get a copy of the TXO state from a trusted source - no different from how you
could get a copy of the UTXO set from a trusted source.

A more pragmatic approach is to accept that people will do that anyway, and
instead assume that sufficiently old blocks are valid. But how old is
"sufficiently old"? First of all, if your full node implementation comes "from
the factory" with a reasonably up-to-date minimum accepted total-work
thresholdⁱ - in other words it won't accept a chain with less than that amount
of total work - it may be reasonable to assume any Sybil attacker with
sufficient hashing power to make a forked chain meeting that threshold with,
say, six months' worth of blocks has enough hashing power to threaten the main
chain as well.

That leaves public attempts to falsify TXO commitments, done out in the open by
the majority of hashing power. In this circumstance the "assumed valid"
threshold determines how long the attack would have to go on before full nodes
start accepting the invalid chain, or at least, newly installed/recently reset
full nodes. The minimum age that we can "assume valid" is a tradeoff between
political/social/technical concerns; we probably want at least a few weeks to
guarantee the defenders a chance to organise themselves.

With this in mind, a longer-than-technically-necessary TXO commitment delayʲ
may help ensure that full node software actually validates some minimum number
of blocks out-of-the-box, without taking shortcuts. However this can be
achieved in a wide variety of ways, such as the author's prev-block-proof
proposal³, fraud proofs, or even a PoW with an inner loop dependent on
blockchain data. Like UTXO commitments, TXO commitments are also potentially
very useful in reducing the need for SPV wallet software to trust third parties
providing them with transaction data.

i) Checkpoints that reject any chain without a specific block are a more
   common, if uglier, way of achieving this protection.

j) A good homework problem is to figure out how the TXO commitment could be
   designed such that the delay could be reduced in a soft-fork.

## Further Work

While we've shown that TXO commitments certainly could be implemented without
increasing peak IO bandwidth/block validation latency significantly with the
delayed commitment approach, we're far from being certain that they should be
implemented this way (or at all).

1) Can a TXO commitment scheme be optimized sufficiently to be used directly
without a commitment delay? Obviously it'd be preferable to avoid all the above
complexity entirely.

2) Is it possible to use a metric other than age, e.g. priority? While this
complicates the pruning logic, it could use the UTXO set space more
efficiently, especially if your goal is to prioritise bitcoin value-transfer
over other uses (though if "normal" wallets nearly never need to use TXO
commitment proofs to spend outputs, the infrastructure to actually do this may
rot).

3) Should UTXO archiving be based on a fixed size UTXO set, rather than an
age/priority/etc. threshold?

4) By fixing the problem (or possibly just "fixing" the problem) are we
encouraging/legitimising blockchain use-cases other than BTC value transfer?
Should we?

5) Instead of TXO commitment proofs counting towards the blocksize limit, can
we use a different miner fairness/decentralization metric/incentive? For
instance, it might be reasonable for the TXO commitment proof size to be
discounted, or ignored entirely, if a proof-of-propagation scheme (e.g.
thinblocks) is used to ensure all miners have received the proof in advance.

6) How does this interact with fraud proofs? Obviously furthering dependency on
non-cryptographically-committed STXO/UTXO databases is incompatible with the
modularized validation approach to implementing fraud proofs.


# References

1) "Merkle Mountain Ranges",
   Peter Todd, OpenTimestamps, Mar 18 2013,
   https://github.com/opentimestamps/opentimestamps-server/blob/master/doc/merkle-mountain-range.md

2) "Do we really need a mempool? (for relay nodes)",
   Peter Todd, bitcoin-dev mailing list, Jul 18th 2015,
   https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009479.html

3) "Segregated witnesses and validationless mining",
   Peter Todd, bitcoin-dev mailing list, Dec 23rd 2015,
   https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012103.html

--
https://petertodd.org 'peter'[:-1]@petertodd.org

_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
