> The part of Revault we are interested in for this study is the delegation process, and more
> specifically the application of spending policies by network monitors (watchtowers).

I'd like to better understand how fee-bumping would be used, i.e. how the watchtower model works:

- Do all of the vault parties both deposit to the vault and pay a refill/fee to the watchtower? Is there a reward the watchtower collects for a successful Cancel, or something else? (Apologies if there's a thorough explanation somewhere that I haven't already seen.)
- Do we expect watchtowers tracking multiple vaults to be batching multiple Cancel transaction fee-bumps?
- Do we expect vault users to be using multiple watchtowers for a better trust model? If so, and we're expecting batched fee-bumps, won't those conflict?
> For Revault we can afford to introduce malleability in the Cancel transaction since there is no
> second-stage transaction depending on its txid. Therefore it is pre-signed with ANYONECANPAY. We
> can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3]. Note how we can't leverage
> the carve out rule, and neither can any other more-than-two-parties contract.

We've already talked about this offline, but I'd like to point out here that even transactions signed with ANYONECANPAY|ALL can be pinned by RBF unless we add an ancestor score rule. [0], [1] (numbers are inaccurate, Cancel Tx feerates wouldn't be that low, but just to illustrate what the attack would look like)
> can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3]. Note how we can't leverage
> the carve out rule, and neither can any other more-than-two-parties contract.

Well stated about CPFP carve out. I suppose the generalization is that allowing n extra ancestorcount=2 descendants to a transaction means it can help contracts with <=n+1 parties (more accurately, outputs)? I wonder if it's possible to devise a different approach for limiting ancestors/descendants, e.g. by height/width/branching factor of the family instead of count... :shrug:
IIUC, a Cancel transaction can be generalized as a 1-in-1-out where the input is presigned by the counterparties with SIGHASH_ANYONECANPAY. The fan-out UTXO pool approach is a clever solution. I also think this smells like a case where improving lower-level RBF rules is more appropriate than requiring applications to write workarounds and generate extra transactions. Seeing that the BIP125#2 (no new unconfirmed inputs) restriction really hurts in this case, if that rule were removed, would you be able to simply keep the 1 big UTXO per vault and cut out the exact nValue you need to fee-bump Cancel transactions? Would that feel less like "burning" for the sake of fee-bumping?
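To make the "cut out the exact nValue" idea concrete, here is a rough sketch of the value a watchtower would need to attach to bring a presigned 1-in-1-out Cancel up to a target feerate. All the sizes and feerates below are my own illustrative assumptions, not Revault's actual figures:

```python
# Hypothetical sketch: how many sats to "cut out" of a big UTXO to bump a
# presigned Cancel transaction to a target feerate. Sizes are assumptions.

CANCEL_VSIZE = 150          # vbytes, presigned 1-in-1-out Cancel (assumed)
FEEBUMP_INPUT_VSIZE = 68    # vbytes added by one fee-bump input (assumed)
PRESIGNED_FEERATE = 1       # sat/vb committed at signing time (assumed)

def feebump_value(target_feerate: int) -> int:
    """Extra sats needed so the bumped Cancel pays `target_feerate`."""
    bumped_vsize = CANCEL_VSIZE + FEEBUMP_INPUT_VSIZE
    presigned_fee = PRESIGNED_FEERATE * CANCEL_VSIZE
    return target_feerate * bumped_vsize - presigned_fee
```

The point being that with BIP125 rule 2 relaxed, the exact value could be peeled on demand instead of pre-fanned-out.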
> First of all, when to fee-bump? At fixed time intervals? At each block connection? It sounds like,
> given a large enough timelock, you could try to greed by "trying your luck" at a lower feerate and
> only re-bumping every N blocks. You would then start aggressively bumping at every block after M
> blocks have passed.

I'm wondering if you also considered other questions like:

- Should a fee-bumping strategy be dependent upon the rate of incoming transactions? To me, it seems like the two components are (1) what's in the mempool and (2) what's going to trickle into the mempool between now and the target block. The first component is best-effort keeping an incentive-compatible mempool; historical data and a crystal ball look like the only options for incorporating the 2nd component.
- Should the fee-bumping strategy depend on how close you are to your timelock expiry? (though this seems like a potential privacy leak, and the game theory could get weird as you mentioned)
- As long as you have a good fee estimator (i.e. given a current mempool, can get an accurate feerate given a % probability of getting into target block n), is there any reason to devise a fee-bumping strategy beyond picking a time interval?
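For what it's worth, the simplest baseline implied by these questions — re-check an estimator at each block connection and RBF only when it rises — could be sketched like this (a hypothetical sketch; `estimate_feerate` stands in for something like Core's `estimatesmartfee`, and the signing/broadcast plumbing is elided):

```python
# Minimal sketch of a "re-bump on each block connection" strategy.
# `estimate_feerate` is a stand-in for an external fee estimator.

def on_block_connected(current_feerate: float, estimate_feerate) -> float:
    """Return the feerate to RBF to, or the unchanged one if no bump is needed."""
    target = estimate_feerate()
    if target > current_feerate:
        # Here we would re-sign the fee-bump input(s) and broadcast
        # the replacement transaction.
        return target
    return current_feerate
```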
It would be interesting to see stats on the spread of feerates in blocks during periods of fee fluctuation.

> > In the event that you notice a consequent portion of the block is filled with transactions paying
> > less than your own, you might want to start panicking and bump your transaction fees by a certain
> > percentage with no consideration for your fee estimator. You might skew miners incentives in doing
> > so: if you increase the fees by a factor of N, any miner with a fraction larger than 1/N of the
> > network hashrate now has an incentive to censor your transaction at first to get you to panic.
>
> Yes I think miner-harvesting attacks should be weighed carefully in the design of offchain contracts fee-bumping strategies, at least in the future when the mining reward exhausts further.

Miner-harvesting (such cool naming!) is interesting, but I want to clarify the value of N - I don't think it's the factor by which you increase the fees on just your transaction.

To codify: your transaction pays a fee of `f1` right now and might pay a fee of `f2` in a later block that the miner expects to mine with 1/N probability. The economically rational miner isn't incentivized if simply `f2 = N * f1` unless their mempool is otherwise empty. By omitting your transaction in this block, the miner can include another transaction/package paying `g1` fees instead, so they lose `f1 - g1` in fees right now. In the future block, they have the choice between collecting `f2` or `g2` (from another transaction/package) in fees, so their gain is `max(f2 - g2, 0)`.

So the condition is more like: a miner with 1/N of the hashrate, employing this censorship strategy, gains only if `max(f2 - g2, 0) > N * (f1 - g1)`. More broadly, the miner only profits if `f2` is significantly higher than `g2` and `f1` is about the same feerate as everything else in your mempool: it seems like they're betting on how much you _overshoot_, not how much you bump.
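To make that inequality testable, a tiny sketch (illustrative fee numbers only — the point is only that censorship requires a big gap between `f2` and `g2`, not just a big bump):

```python
# Sketch of the condition above: a miner with hashrate share 1/N who censors
# your transaction forfeits f1 - g1 in fees now, betting on collecting
# max(f2 - g2, 0) extra if they win a later block.

def censorship_profitable(n: int, f1: float, g1: float,
                          f2: float, g2: float) -> bool:
    """True iff max(f2 - g2, 0) > N * (f1 - g1)."""
    return max(f2 - g2, 0) > n * (f1 - g1)
```

For instance, with 10% hashrate (N=10), forgoing 10 sats now only pays off if the later overshoot exceeds `g2` by more than 100 sats.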
Slightly related question: in contracts, generally, the timelock deadline is revealed in the script, so the miner knows how "desperate" we are, right?
Is that a problem?
For Revault, if your Cancel transaction is a keypath spend (I think I remember reading that somewhere?) and you don't reveal the script, they don't see your timelock deadline, yes?
Again, thanks for the digging and sharing. :)
Best,
Gloria

On Tue, Nov 30, 2021 at 3:27 PM darosior via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:

Hi Antoine,

Thanks for your comment. I believe for Lightning it's simpler with regard to the management of the UTxO pool, but harder with regard to choosing a threat model. Responses inline.

For any opened channel, ensure the confirmation of a Commitment transaction and the children HTLC-Success/HTLC-Timeout transactions. Note, in the Lightning security game you have to consider (at least) 4 types of players' moves and incentives: your node, your channel counterparties, the miners, the crowd of bitcoin users. The number of the last type of players is unknown to your node, however it should not be forgotten you're in competition for block space, therefore their block demand bids should be anticipated and reacted to in consequence. With that remark in mind, implications for your LN fee-bumping strategy will be raised afterwards.

For a LN service provider, on-chain overpayments bear on your operational costs, thus downgrading your economic competitiveness. For the average LN user, overpayment might price them out of a non-custodial LN deployment, as they don't have the minimal security budget to be on their own.

I think this problem statement can be easily generalised to any offchain contract. And your points stand for all of them.

"For any opened contract, ensure at any point the confirmation of a (set of) transaction(s) in a given number of blocks"

Same issue with Lightning, we can be pinned today on the basis of replace-by-fee rule 3. We can also be blinded by network mempool partitions: a pinning counterparty can segregate all the full-nodes into as many subsets by broadcasting a different revoked Commitment transaction to each. For Revault, I think you can also do unlimited partitions by mutating the ANYONECANPAY-input of the Cancel.

Well you can already do unlimited partitions by adding different inputs to it.
You could malleate the witness, but since we are using Miniscript i'm confident you would only be able to in a marginal way.

That said, if you have a distributed towers deployment, spread across the p2p network topology, and they can't be clustered together through cross-layer or intra-layer heuristics, you should be able to reliably observe such partitions. I think such distributed monitors are deployed by a few L1 merchants accepting 0-conf to detect naive double-spends.

We should aim for more than a 0-conf (in)security level..

It seems to me the only policy-level mitigation for RBF pinning around the "don't decrease the absolute fees of a less-than-a-block mempool" rule would be to drop the requirement on increasing absolute fees if the mempool is "full enough" (and the feerate increases exponentially, of course).

Another approach could be introducing new consensus rules as proposed by Jeremy last year [0].

If we go in the realm of new consensus rules, then i think that simply committing to a maximum tx size would fix pinning by RBF rule 3. Could be in the annex, or in the unused sequence bits (although they currently are used by Lightning, meh). You could also check in the output script that the input commits to this.

Have we already discussed a fee-bumping "shared cache", a CPFP variation? Strawman idea: Alice and Bob commit collateral inputs to a separate UTXO from the main "offchain contract" one. This UTXO is locked by a multi-sig. For any Commitment transaction pre-signed, also counter-sign a CPFP with top mempool feerate included, spending a Commitment anchor output and the shared-cache UTXO. If the fees spike, you can re-sign a higher-feerate CPFP, assuming interactivity. As the CPFP is counter-signed by everyone, the outputs can be CSV-1 encumbered to prevent pinnings. If the shared cache is funded at parity, there shouldn't be an incentive to waste or maliciously inflate the feerate.
I think this solution can be easily generalized to more than 2 counterparties by using a multi-signature scheme. Big issue: if the feerate is short due to fee spikes and you need to re-sign a higher-feerate CPFP, you're trusting your counterparty to interact, though arguably not worse than the current update fee mechanism.

It really looks just like `update_fee`. Except maybe with the property that the channel liquidity doesn't depend on the onchain feerate. In any case, for Lightning i think it's a bad idea to re-introduce trust on this side post anchor outputs. For Revault it's clearly out of the question to introduce trust in your counterparties (why would you bother having a fee-bumping mechanism in the first place then?). Probably the same holds for all offchain contracts.

> For Lightning, it'd mean keeping an equivalent amount of funds as the sum of all your
> channels balances sitting there unallocated "just in case". This is not reasonable.

Agree, game-theory wise, you would like to keep a full fee-bumping reserve, ready to burn as much in fees as the contested HTLC value, as it's the maximum gain of your counterparty. Though perfect equilibrium is hard to achieve because your malicious counterparty might have an edge pushing you to broadcast your Commitment first by withholding HTLC resolution.

Fractional fee-bumping reserves are much more realistic to expect in the LN network. Lower fee-bumping reserve, higher liquidity deployed, in theory higher routing fees. By observing historical feerates, average offchain balances at risk and expected routing fee gains, you should be able to discover an equilibrium where higher levels of reserve aren't worth the opportunity cost. I guess this equilibrium could be your LN fee-bumping reserve max feerate.

Note, I think the LN approach is a bit different from what suits a custody protocol like Revault, as you compute a direct return on the frozen fee-bumping liquidity.
With Revault, if you have numerous bitcoins protected, it might be more interesting to adopt a "buy the mempool, stupid" strategy than risking fund safety for a few percentage points of interest returns.

True for routing nodes. For wallets (if receiving funds), it's not about an investment: just users' expectations of being able to transact without risking to lose their funds (ie being able to enforce their contract onchain). Although wallets are much less at risk.

This is where the "anticipate the crowd of bitcoin users' moves" point can be laid out. As the crowd of bitcoin users' fee-bumping reserves are ultimately unknown to your node, you should be ready to be a bit more conservative than the vanilla fee-bumping strategies shipped by default. In case of massive mempool congestion, your additional conservatism might get your time-sensitive transactions confirmed ahead of the crowd of bitcoin users. First problem: if all offchain bitcoin software adopts that strategy we might inflate the worst-case feerate at the benefit of the miners, without holistically improving block throughput. Second problem: your class of offchain bitcoin software might have a ridiculous fee-bumping reserve compared to other classes of offchain bitcoin software (Revault > Lightning) and just be priced out by design in case of mempool congestion. Third problem: as the number of offchain bitcoin applications should go up with time, your fee-bumping reserve levels based on historical data might always be late by one "bank-run" scenario.

Black swan event 2.0? Just, rule n°3 is inherent to any kind of fee estimation.

For Lightning, if you're short on fee-bumping reserves you might still do preemptive channel closures, either cooperatively or unilaterally, and get back the off-chain liquidity to protect the more economically interesting channels.
Though again, that kind of automatic behavior might be compelling at the individual node level, but make the mempool congestion worse holistically.

Yeah so we are back to the "fractional reserve" model: you can only enforce X% of the offchain contracts you participate in.. Actually it's even an added assumption: that you still have operating contracts, with honest counterparties.

In case of massive mempool congestion, you might try to front-run the crowd of bitcoin users relying on block connections for fee-bumping, and thus start your fee-bumping as soon as you observe feerate group fluctuations in your local mempool(s).

I don't think any kind of mempool-based estimate generalizes well, since at any point the expected time before the next block is 10 minutes (and a lot can happen in 10min).

Also you might proceed with your fee-bumping ticks on a local clock instead of block connections in case of time-dilation or deeper eclipse attacks on your local node. Your view of the chain might be compromised but not your ability to broadcast transactions thanks to emergency channels (in the non-LN sense... though in fact quid of txn wrapped in onions?) of communication.

Oh, yeah, i didn't make explicit "not getting eclipsed" (or more generally "data availability") as an assumption since it's generally one made by participants of any offchain contract. In this case you can't even have decent fee estimation, so you are screwed anyways.

Yes, the question of how you enforce this block insurance market stays open. Reputation, which might be best avoided due to the latent centralization effect, might be hard to stack and audit reliably for an emergency mechanism running, hopefully, once in a halvening period. Maybe some cryptographic or economically based mechanism on slashing or swaps could be found...

Unfortunately, given current mining centralisation, pools are in a very good position to offer pretty decent SLAs around that.
With a block space insurance, you of course don't need all these convoluted fee-bumping hacks. I'm very concerned that large stakeholders of the "offchain contracts ecosystem" would just go this (easier) way and further increase mining centralisation pressure.

I agree that a cryptography-based scheme around this type of insurance service would be the best way out.

Antoine

Le lun. 29 nov. 2021 à 09:34, darosior via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> a écrit :

Hi everyone,

Fee-bumping is paramount to the security of many protocols building on Bitcoin, as they require the confirmation of a transaction (which might be presigned) before the expiration of a timelock at any point after the establishment of the contract.

The part of Revault using presigned transactions (the delegation from a large to a smaller multisig) is no exception. We have been working on how to approach this for a while now and i'd like to share what we have in order to open a discussion on this problem so central to what seems to be The Right Way [0] to build on Bitcoin but which has yet to be discussed in detail (at least publicly).

I'll discuss what we came up with for Revault (at least for what will be its first iteration) but my intent with posting to the mailing list is more to frame the questions to this problem we are all going to face rather than present the results of our study tailored to the Revault usecase. The discussion is still pretty Revault-centric (as it's the case study) but hopefully this can help future protocol designers and/or start a discussion around what everyone's doing for existing ones.

## 1. Reminder about Revault

The part of Revault we are interested in for this study is the delegation process, and more specifically the application of spending policies by network monitors (watchtowers).

Coins are received on a large multisig. Participants of this large multisig create 2 [1] transactions.
The Unvault, spending a deposit UTxO, creates an output paying either to the small multisig after a timelock or to the large multisig immediately. The Cancel, spending the Unvault output through the non-timelocked path, creates a new deposit UTxO.

Participants regularly exchange the Cancel transaction signatures for each deposit, sharing the signatures with the watchtowers they operate. They then optionally [2] sign the Unvault transaction and share the signatures with the small multisig participants, who can in turn use them to proceed with a spending. Watchtowers can enforce spending policies (say, can't Unvault outside of business hours) by having the Cancel transaction be confirmed before the expiration of the timelock.

## 2. Problem statement

For any delegated vault, ensure the confirmation of a Cancel transaction in a configured number of blocks at any point. In so doing, minimize the overpayments and the UTxO set footprint. Overpayments increase the burden on the watchtower operator by increasing the required frequency of refills of the fee-bumping wallet, which is already the worst user experience. You are likely to manage a number of UTxOs proportional to your number of vaults, which comes at a cost for you as well as everyone running a full node.

Note that this assumes miners are economically rational, are incentivized by *public* fees and that you have a way to propagate your fee-bumped transaction to them. We also don't consider the block space bounds.

In the previous paragraph and the following text, "vault" can generally be replaced with "offchain contract".

## 3. With presigned transactions

As you all know, the first difficulty is to get to be able to unilaterally enforce your contract onchain. That is, any participant must be able to unilaterally bump the fees of a transaction even if it was co-signed by other participants.

For Revault we can afford to introduce malleability in the Cancel transaction since there is no second-stage transaction depending on its txid.
Therefore it is pre-signed with ANYONECANPAY. We can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3]. Note how we can't leverage the carve out rule, and neither can any other more-than-two-parties contract. This has a significant implication for the rest, as we are entirely burning fee-bumping UTxOs.

This opens up a pinning vector, or at least a significant nuisance: any other party can largely increase the absolute fee without increasing the feerate, leveraging the RBF rules to prevent you from replacing it without paying an insane fee. And you might not see it in your own mempool and could only suppose it's happening by receiving non-full blocks or with transactions paying a lower feerate.

Unfortunately i know of no other primitive usable by multi-party (i mean, >2) presigned transaction protocols for fee-bumping that isn't (more) vulnerable to pinning.

## 4. We are still betting on future feerate

The problem is still missing one more constraint. "Ensuring confirmation at any time" involves ensuring confirmation at *any* feerate, which you *cannot* do. So what's the limit? In theory you should be ready to burn as much in fees as the value of the funds you want to get out of the contract. So... For us it'd mean keeping for each vault an equivalent amount of funds sitting there on the watchtower's hot wallet. For Lightning, it'd mean keeping an equivalent amount of funds as the sum of all your channel balances sitting there unallocated "just in case". This is not reasonable.

So you need to set a maximum feerate, above which you won't be able to ensure the enforcement of all your contracts onchain at the same time.
We call that the "reserve feerate" and you can have different strategies for choosing it, for instance:

- The 85th percentile over the last year of transaction feerates
- The maximum historical feerate
- The maximum historical feerate adjusted in dollars (makes more sense but introduces a (set of?) trusted oracle(s) in a security-critical component)
- Picking a random high feerate (why not? It's an arbitrary assumption anyways)

Therefore, even if we don't have to bet on the broadcast-time feerate market at signing time anymore (since we can unilaterally bump), we still need some kind of prediction in preparation of making funds available to bump the fees at broadcast time. Apart from judging that 500sat/vb is probably more reasonable than 10sat/vbyte, this unfortunately sounds pretty much crystal-ball-driven. We currently use the maximum of the 95th percentiles over 90-day windows over historical block chain feerates. [4]

## 5. How much funds does my watchtower need?

That's what we call the "reserve". Depending on your reserve feerate strategy it might vary over time. This is easier to reason about with a per-contract reserve. For Revault it's pretty straightforward since the Cancel transaction size is static: `reserve_feerate * cancel_size`. For other protocols with dynamic transaction sizes (or even packages of transactions) it's less so. For your Lightning channel you would probably take the maximum size of your commitment transaction according to your HTLC exposure settings + the size of as many `htlc_success` transactions?

Then you either have your software or your user guesstimate how many offchain contracts the watchtower will have to watch, multiply that by the per-contract reserve and refill this amount (plus some slack in practice).
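Sketched in code, the reserve feerate heuristic (maximum of 95th percentiles over 90-day windows) and the per-vault reserve could look as follows. This is an illustration only, not the actual revault/research code: `feerates_by_day` is a hypothetical list of daily feerate observations, and the 150 vb Cancel size is an assumed figure.

```python
# Illustrative sketch: reserve feerate as the max of 95th percentiles over
# non-overlapping 90-day windows of daily feerates (sat/vb).
import statistics

def reserve_feerate(feerates_by_day, window=90, pct=95):
    windows = (feerates_by_day[i:i + window]
               for i in range(0, len(feerates_by_day) - window + 1, window))
    # quantiles(n=100) returns 99 cut points; index pct-1 is the percentile.
    return max(statistics.quantiles(w, n=100)[pct - 1] for w in windows)

def per_vault_reserve(res_feerate, cancel_vsize=150):
    # The Cancel tx size is static in Revault; 150 vb is an assumed value.
    return res_feerate * cancel_vsize
```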
Once again, a UX tradeoff (not even mentioning the guesstimation UX): overestimating leads to too many unallocated funds sitting on a hot wallet, underestimating means (at best) inability to participate in new contracts or being "at risk" (not being able to enforce all your contracts onchain at your reserve feerate) before a new refill.

For vaults you likely have large-value UTxOs and small transactions (the Cancel is one-in one-out in Revault). For some other applications with large transactions and lower-value UTxOs on average it's likely that only part of the offchain contracts might be enforceable at a reasonable feerate. Is that reasonable?

## 6. UTxO pool layout

Now that you somehow managed to settle on a refill amount, how are you going to use these funds? Also, you'll need to manage your pool across time (consolidating small coins, and probably fanning out large ones).

You could keep a single large UTxO and peel it as you need to sponsor transactions. But this means that you need to create a coin of a specific value according to your need at the current feerate estimation, hope to have it confirmed in a few blocks (at least for now! [5]), and hope that the value won't be obsolete by the time it confirms. Also, you'd have to do that for any number of Cancels, chaining feebump coin creation transactions off the change of the previous ones or replacing them with more outputs. Both seem to become really unmanageable (and expensive) in many edge cases, shortening the time you have to confirm the actual Cancel transaction and creating uncertainty about the reserve (how much is my just-in-time fanout going to cost me in fees that i need to refill in advance on my watchtower wallet?). This is less of a concern for protocols using CPFP to sponsor transactions, but they rely on a policy rule specific to 2-parties contracts.

Therefore for Revault we fan out the coins per-vault in advance.
We do so at refill time so the refiller can give an excess to pay for the fees of the fanout transaction (which is reasonable since it will occur just after the refilling transaction confirms). When the watchtower is asked to watch for a new delegated vault it will allocate coins from the pool of fanned-out UTxOs to it (failing that, it would refuse the delegation).

What is a good distribution of UTxO amounts per vault? We want to minimize the number of coins, still have coins small enough to not overpay (remember, we can't have change) and be able to bump a Cancel up to the reserve feerate using these coins. The two latter constraints are directly in contradiction, as the minimal value of a coin usable at the reserve feerate (paying for its own input fee + bumping the feerate by, say, 5sat/vb) is already pretty high. Therefore we decided to go with two distributions per vault. The "reserve distribution" alone ensures that we can bump up to the reserve feerate and is usable for high feerates. The "bonus distribution" is not, but contains smaller coins useful to prevent overpayments during low and medium fee periods (which is most of the time).

Both distributions are based on a basic geometric series [6]. Each value is half the previous one. This exponentially decreases the values, limiting the number of coins. But this also allows for pretty small coins to exist, and each coin's value is equal to the sum of the smaller coins, or smaller by at most the value of the smallest coin, therefore bounding the maximum overpayment to the smallest coin's value [7].

For the management of the UTxO pool across time we merged the consolidation with the fanout. When fanning out a refilled UTxO, we scan the pool for coins that need to be consolidated according to a heuristic.
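The halving distribution described above can be sketched in code (illustrative values only, not the actual revault/research implementation):

```python
# Sketch of a per-vault halving distribution: each coin is worth (roughly)
# half the previous one, so any coin's value equals the sum of all smaller
# ones plus at most the smallest. Values are illustrative sats.

def reserve_distribution(total: int, n_coins: int):
    """Split `total` into n_coins halving values (largest first)."""
    values = []
    remaining = total
    for _ in range(n_coins - 1):
        values.append(remaining // 2)
        remaining -= remaining // 2
    # The remainder sticks to the last coin so values sum exactly to `total`.
    values.append(remaining)
    return values
```

For example, 1000 sats over 4 coins gives [500, 250, 125, 125]: the largest coin equals the sum of the smaller ones, which is what bounds the overpayment to the smallest coin's value.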
An instance of such a heuristic is "the coin isn't allocated and would not have been able to increase the fee at the median feerate over the past 90 days of blocks".

We had this assumption that feerates would tend to go up with time and therefore discarded having to split some UTxOs from the pool. We however overlooked that a large increase in the exchange price of BTC as we've seen during the past year could invalidate this assumption, and that should arguably be reconsidered.

## 7. Bumping and re-bumping

First of all, when to fee-bump? At fixed time intervals? At each block connection? It sounds like, given a large enough timelock, you could try to greed by "trying your luck" at a lower feerate and only re-bumping every N blocks. You would then start aggressively bumping at every block after M blocks have passed. But that's actually a bet (in disguise?) that the next-block feerate in M blocks will be lower than the current one. In the absence of any predictive model it is more reasonable to just start being aggressive immediately.

You probably want to base your estimates on `estimatesmartfee` and as a consequence you would re-bump (if needed) after each block connection, when your estimates get updated and you notice your transaction was not included in the block.

In the event that you notice a consequent portion of the block is filled with transactions paying less than your own, you might want to start panicking and bump your transaction fees by a certain percentage with no consideration for your fee estimator. You might skew miners' incentives in doing so: if you increase the fees by a factor of N, any miner with a fraction larger than 1/N of the network hashrate now has an incentive to censor your transaction at first to get you to panic.
Also note this can happen if you want to pay the absolute fees for the 'pinning' attack mentioned in section #2, and that might actually incentivize miners to perform it themselves..

The gist is that the most effective way to bump and rebump (RBF the Cancel tx) seems to just be to consider the `estimatesmartfee 2 CONSERVATIVE` feerate at every block your tx isn't included in, and to RBF it if the feerate is higher.

In addition, we fall back to a blockchain-based estimation when estimates aren't available (eg if the user stopped their WT for, say, an hour and we come back up): we use the 85th percentile over the feerates in the last 6 blocks. Sure, miners can try to have an influence on that by stuffing their blocks with large-fee self-paying transactions, but they would need to:

1. Be sure to catch a significant portion of the 6 blocks (at least 2, actually)
2. Give up on 25% of the highest fee-paying transactions (assuming they got the 6 blocks; it's proportionally larger and more uncertain as they get fewer of them)
3. Hope that our estimator will fail and we need to fall back to the chain-based estimation

## 8. Our study

We essentially replayed the historical data with different deployment configurations (number of participants and timelock) and probabilities of an event occurring (an event being, say, an Unvault, an invalid Unvault, a new delegation, ..). We then observed different metrics such as the time at risk (when we can't enforce all our contracts at the reserve feerate at the same time), or the operational cost.

We got the historical fee estimates data from Statoshi [9], Txstats [10] and the historical chain data from Riccardo Casatta's `blocks_iterator` [11]. Thanks!

The (research-quality..) code can be found at https://github.com/revault/research under the section "Fee bumping". Again it's very Revault specific, but at least the data can probably be reused for studying other protocols.

## 9. Insurances

Of course, given it's all hacks and workarounds and there is no good answer to "what is a reasonable feerate up to which we need to make contracts enforceable onchain?", there is definitely room for an insurance market. But this enters the realm of opinions. Although i do have some (having discussed this topic for the past years with different people), i would like to keep this post focused on the technical aspects of this problem.

[0] As far as i can tell, having offchain contracts be enforceable onchain by confirming a transaction before the expiration of a timelock is a widely agreed-upon approach. And i don't think we can opt for any other fundamentally different one, as you want to know you can claim back your coins from a contract after a deadline before taking part in it.

[1] The Real Revault (tm) involves more transactions, but for the sake of conciseness i only detailed a minimum instance of the problem.

[2] Only presigning part of the Unvault transactions allows to only delegate part of the coins, which can be abstracted as "delegate x% of your stash" in the user interface.

[7] Of course this assumes a combinatorial coin selection, but i believe it's ok given we limit the number of coins beforehand.

[8] Although there is the argument to outbid a censorship, anyone censoring you isn't necessarily a miner.

_______________________________________________
bitcoin-dev mailing list