* Re: [bitcoin-dev] Thoughts on fee bumping
@ 2022-02-18 2:08 Prayank
0 siblings, 0 replies; 25+ messages in thread
From: Prayank @ 2022-02-18 2:08 UTC (permalink / raw)
To: James Obeirne; +Cc: Bitcoin Dev
[-- Attachment #1: Type: text/plain, Size: 326 bytes --]
> If anyone has any indication that there are miners running forks of bitcoind that change this behavior, I'd be curious to know it.
It is possible because some mining pools use bitcoind with custom patches.
Example: https://twitter.com/0xB10C/status/1461392912600776707 (f2pool)
--
Prayank
A3B1 E430 2298 178F
[-- Attachment #2: Type: text/html, Size: 663 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-17 18:18 ` James O'Beirne
@ 2022-02-18 9:01 ` darosior
0 siblings, 0 replies; 25+ messages in thread
From: darosior @ 2022-02-18 9:01 UTC (permalink / raw)
To: james.obeirne, aj; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 2699 bytes --]
James,
You seem to imply that the scenario described isn't prevented today. It is. The mempool acceptance rules for a replacement depend not only
on the transaction feerate but also on the absolute transaction fee [0]. That's why I raised it in the first place...
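(As a rough illustration only, here is a simplified Python paraphrase of the BIP125-style checks behind [0]; the real code operates on packages of conflicting transactions and uses the node's configured incremental relay feerate, so the names and the 1 sat/vB value below are just placeholders:)

    INCREMENTAL_RELAY_FEERATE = 1  # sat/vB, placeholder value

    def replacement_pays_enough(new_fee, new_vsize, old_fee, old_vsize):
        # Higher feerate than what it replaces...
        if new_fee / new_vsize <= old_fee / old_vsize:
            return False
        # ...and a higher absolute fee, by enough to pay for the
        # replacement's own relay bandwidth.
        return new_fee >= old_fee + INCREMENTAL_RELAY_FEERATE * new_vsize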
Antoine
[0] https://github.com/bitcoin/bitcoin/blob/66636ca438cb65fb18bcaa4540856cef0cee2029/src/validation.cpp#L944-L947
Of course, if you are evicting transactions then you don't have the issue I mentioned, so it's fine doing so.
-------- Original Message --------
On Feb 17, 2022, 19:18, James O'Beirne < james.obeirne@gmail.com> wrote:
>> Is it really true that miners do/should care about that?
>
> De facto, any miner running an unmodified version of bitcoind doesn't
> care about anything aside from ancestor fee rate, given that the
> BlockAssembler as-written orders transactions for inclusion by
> descending ancestor fee-rate and then greedily adds them to the block
> template. [0]
>
> If anyone has any indication that there are miners running forks of
> bitcoind that change this behavior, I'd be curious to know it.
>
> Along the lines of what AJ wrote, optimal transaction selection is
> NP-hard (knapsack problem). Any time that a miner spends deciding how
> to assemble the next block is time not spent grinding on the nonce, and
> so I'm skeptical that miners in practice are currently doing anything
> that isn't fast and simple like the default implementation: sorting
> fee-rate in descending order and then greedily packing.
>
> But it would be interesting to hear evidence to the contrary.
>
> ---
>
> You can make the argument that transaction selection is just a function
> of mempool contents, and so mempool maintenance criteria might be the
> thing to look at. Mempool acceptance is gated based on a minimum
> feerate[1]. Mempool eviction (when running low on space) happens on
> the basis of max(self_feerate, descendant_feerate) [2]. So even in the
> mempool we're still talking in terms of fee rates, not absolute fees.
>
> That presents us with the "is/ought" problem: just because the mempool
> *is* currently gating only on fee rate doesn't mean that's optimal. But
> if the whole point of the mempool is to hold transactions that will be
> mined, and if there's good reason that txns are chosen for mining based
> on fee rate (it's quick and good enough), then it seems like fee rate
> is the approximation that should ultimately prevail for txn
> replacement.
>
> [0]:
> https://github.com/bitcoin/bitcoin/blob/master/src/node/miner.cpp#L310-L320
> [1]:
> https://github.com/bitcoin/bitcoin/blob/master/src/txmempool.cpp#L1106
> [2]:
> https://github.com/bitcoin/bitcoin/blob/master/src/txmempool.cpp#L1138-L1144
[-- Attachment #2: Type: text/html, Size: 4299 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
@ 2022-02-18 0:54 Prayank
0 siblings, 0 replies; 25+ messages in thread
From: Prayank @ 2022-02-18 0:54 UTC (permalink / raw)
To: aj; +Cc: Bitcoin Dev
[-- Attachment #1: Type: text/plain, Size: 1029 bytes --]
> I suspect the "economically rational" choice would be to happily trade off that immediate loss against even a small chance of a simpler policy encouraging higher adoption of bitcoin, _or_ a small chance of more on-chain activity due to higher adoption of bitcoin protocols like lightning and thus a lower chance of an empty mempool in future.
Is this another way of saying a few developers will decide RBF policy for miners and they should follow it because it is the only way bitcoin gets more adoption? On-chain activity depends on a lot of things; I doubt any change in policy will affect it any time soon, and miners should have the freedom to decide things that aren't consensus rules.
The Lightning Network contributes to on-chain activity only with the opening and closing of channels. Based on the chart in the link below for channels opened/closed per block, its contribution is less than 1% in fees:
https://txstats.com/dashboard/db/lightning-network?orgId=1&from=now-6M&to=now
--
Prayank
A3B1 E430 2298 178F
[-- Attachment #2: Type: text/html, Size: 1417 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-17 14:32 ` Anthony Towns
2022-02-17 18:18 ` James O'Beirne
@ 2022-02-18 0:35 ` Antoine Riard
1 sibling, 0 replies; 25+ messages in thread
From: Antoine Riard @ 2022-02-18 0:35 UTC (permalink / raw)
To: Anthony Towns, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 16644 bytes --]
While I roughly agree with the thesis that different replacement policies
offer marginal block reward gains _in the current state_ of the ecosystem,
I would be more conservative about extending the conclusions to the
medium/long-term future.
> I suspect the "economically rational" choice would be to happily trade
> off that immediate loss against even a small chance of a simpler policy
> encouraging higher adoption of bitcoin, _or_ a small chance of more
> on-chain activity due to higher adoption of bitcoin protocols like
> lightning and thus a lower chance of an empty mempool in future.
This is making the assumption that the economic interests of the different
classes of actors in the Bitcoin ecosystem are not only well-understood but
also aligned. We have seen in the past mining actors' behavior delay the
adoption of protocol upgrades which were expected to encourage higher
adoption of Bitcoin. Further, while miners likely have an incentive to see
an increase in on-chain activity, there is also the possibility that
lightning will be so throughput-efficient that it drains the mempool
backlog, to a point where block space demand is not high enough to pay back
the cost of mining hardware and operational infrastructure, or at least not
enough to match the expected return on mining investments.
Of course, it could be argued that since a utxo-sharing protocol like
lightning compresses the number of payments per unit of block space, it
lowers the fee burden, thus making Bitcoin as a payment system far more
attractive to a wider population of users, ultimately increasing block
space demand and satisfying the miners.
In the current state of our knowledge, this hypothesis sounds the most
plausible. Still, I would say it's better to be cautious until we better
understand the interactions between the different layers of the Bitcoin
ecosystem.
> Certainly those percentages can be expected to double every four years as
> the block reward halves (assuming we don't also reduce the min relay fee
> and block min tx fee), but I think for both miners and network stability,
> it'd be better to have the mempool backlog increase over time, which
> would both mean there's no/less need to worry about the special case of
> the mempool being empty, and give a better incentive for people to pay
> higher fees for quicker confirmations.
Intuitively, if we assume that liquidity isn't free on lightning [0], there
should be a subjective equilibrium where it's cheaper to open new channels
to reduce one's own graph traversal than to keep paying high routing fees.
As the core of the network becomes busier, I think we should see more LN
actors doing that kind of arbitrage, guaranteeing a mempool backlog in the
long term.
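(A toy break-even model of that arbitrage, with purely illustrative inputs rather than measured values:)

    def channel_open_worth_it(open_cost_sat, expected_payments,
                              avg_payment_sat, routing_fee_ppm_saved):
        # routing fees saved over the channel's lifetime vs. the on-chain
        # cost of opening it
        savings = expected_payments * avg_payment_sat * routing_fee_ppm_saved / 1e6
        return savings > open_cost_sat

    # e.g. a 10k sat open cost vs. 1000 payments of 100k sat saving 200 ppm each
    print(channel_open_worth_it(10_000, 1000, 100_000, 200))  # True: opening pays off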
> If you really want to do that
> optimally, I think you have to have a mempool that retains conflicting
> txs and runs a dynamic programming solution to pick the best set, rather
> than today's simple greedy algorithms both for building the block and
> populating the mempool?
As of today, I think the power efficiency of mining chips and access to
affordable sources of energy are more significant factors in the
profitability of mining operations than the optimality of block
construction/replacement policy, which IMO makes the argument that small
deltas in block reward gains aren't that relevant.
That said, the former factors might become a commodity, and the latter a
competitive advantage. That could incentivize the development of such
greedy algorithms, potentially in a covert way as we have seen with
AsicBoost.
> Is there a plausible example where the difference isn't that marginal?
The paradigm might change in the future. If we see the deployment of
channel factories/payment pools, we might have users competing to spend a
shared-utxo with different liquidity needs and thus ready to overbid. Lack
of a "conflict pool" logic might make you lose income.
> Always accepting (package/descendent) fee rate increases removes the
> possibility of pinning entirely, I think
I think the pinnings we're affected by today are due to the ability of a
malicious counterparty to halt the on-chain resolution of the channel. The
presence of a pinning commitment transaction with a low chance of
confirmation (abuse of BIP125 rule 3) prevents the honest counterparty from
fee-bumping her own version of the commitment, and thus from redeeming an
HTLC before timelock expiration. As long as one commitment confirms,
independently of who issued it, the pinning is over. I think moving to
replace-by-feerate allows the honest counterparty to fee-bump her
commitment, thus offering a compelling block space demand, or forces the
malicious counterparty to enter a fee race.
To gather my thoughts on the subject: the replace-by-feerate policy could
produce lower-fee blocks in today's environment of empty/barely-filled
blocks. That said, the delta sounds marginal enough w.r.t. the other
factors of a mining business that the potential implications for
centralization shouldn't be much of a worry (or at least a low-key one).
If the risk is perceived as intolerable, it could be argued that an
intermediate solution would be to deploy a "dual" RBF policy
(replace-by-fee for the top of the mempool, replace-by-feerate
for the remaining part).
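(A very rough sketch of what such a dual policy could look like, with hypothetical helper fields and no claim about how the "next block feerate" would be estimated; not a concrete proposal:)

    def replacement_acceptable(new_tx, old_tx, next_block_feerate):
        # "Top of the mempool": the conflicted tx would make it into the
        # next block, so the miner cares about absolute fee -> replace-by-fee
        if old_tx.feerate >= next_block_feerate:
            return new_tx.fee > old_tx.fee
        # Otherwise only the feerate matters -> replace-by-feerate
        return new_tx.feerate > old_tx.feerate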
Still, I believe we might have to adopt more sophisticated replacement
policies in the long term to level the field among the mining ecosystem if
block construction/mempool acceptance strategies become a competitive
factor. Failing to do so might provoke a latent centralization of mining
due to heterogeneity in the block rewards earned. This heterogeneity would
also likely degrade the safety of L2 nodes, as those actors wouldn't be
able to know how to format their fee-bumpings in the absence of _a_ mempool
replacement standard.
> Note that if we did have this policy, you could abuse it to cheaply drain
> people's mempools: if there was a 300MB backlog, you could publish 2980
> 100kB txs paying a fee rate just below the next block fee, meaning you'd
> kick out the previous backlog and your transactions take up all but the
> top 2MB of the mempool; if you then replace them all with perhaps 2980
> 100B txs paying a slightly higher fee rate, the default mempool will be
> left with only 2.3MB, at an ultimate cost to you of only about 30% of a
> block in fees, and you could then fill the mempool back up by spamming
> 300MB of ultra low fee rate txs.
I believe we might have bandwidth-bleeding issues with our current
replacement policy. I think it would be good to have a cost estimate of
them, and to ensure a newer replacement policy stays within the same bounds.
> I think spam prevention at the outbound relay level isn't enough to
> prevent that: an attacker could contact every public node and relay the
> txs directly, clearing out the mempool of most public nodes directly. So
> you'd want some sort of spam prevention on inbound txs too?
That we have to think about replacement spam prevention sounds reasonable
to me. I would be worried about utxo-based replacement limitations, which
could be abused in the context of multi-party protocols (introducing a new
pinning vector). One solution could be to have a per-party transaction
"tag" and allocate a replacement slot accordingly, maybe preventing a
malicious counterparty from abusing a "global" utxo slot during periods of
low fees...
Antoine
[0] https://twitter.com/alexbosworth/status/1476946257939628035
On Thu, Feb 17, 2022 at 9:32 AM Anthony Towns via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> On Thu, Feb 10, 2022 at 07:12:16PM -0500, Matt Corallo via bitcoin-dev
> wrote:
> > This is where *all* the complexity comes from. If our goal is to "ensure
> a
> > bump increases a miner's overall revenue" (thus not wasting relay for
> > everyone else), then we precisely *do* need
> > > Special consideration for "what should be in the next
> > > block" and/or the caching of block templates seems like an imposing
> > > dependency
> > Whether a transaction increases a miner's revenue depends precisely on
> > whether the transaction (package) being replaced is in the next block -
> if
> > it is, you care about the absolute fee of the package and its
> replacement.
>
> On Thu, Feb 10, 2022 at 11:44:38PM +0000, darosior via bitcoin-dev wrote:
> > It's not that simple. As a miner, if I have less than 1MvB of
> > transactions in my mempool, I don't want a 10sats/vb transaction paying
> > 100000sats to be replaced by a 100sats/vb transaction paying only 10000sats.
>
> Is it really true that miners do/should care about that?
>
> If you did this particular example, the miner would be losing 90k sats
> in fees, which would be at most 1.44 *hundredths* of a percent of the
> block reward with the subsidy at 6.25BTC per block, even if there were
> no other transactions in the mempool. Even cumulatively, 10sats/vb over
> 1MB versus 100sats/vb over 10kB is only a 1.44% loss of block revenue.
>
> I suspect the "economically rational" choice would be to happily trade
> off that immediate loss against even a small chance of a simpler policy
> encouraging higher adoption of bitcoin, _or_ a small chance of more
> on-chain activity due to higher adoption of bitcoin protocols like
> lightning and thus a lower chance of an empty mempool in future.
>
> If the network has an "empty mempool" (say less than 2MvB-10MvB of
> backlog even if you have access to every valid 1+ sat/vB tx on any node
> connected to the network), then I don't think you'll generally have txs
> with fee rates greater than ~20 sat/vB (ie 20x the minimum fee rate),
> which means your maximum loss is about 3% of block revenue, at least
> while the block subsidy remains at 6.25BTC/block.
>
> Certainly those percentages can be expected to double every four years as
> the block reward halves (assuming we don't also reduce the min relay fee
> and block min tx fee), but I think for both miners and network stability,
> it'd be better to have the mempool backlog increase over time, which
> would both mean there's no/less need to worry about the special case of
> the mempool being empty, and give a better incentive for people to pay
> higher fees for quicker confirmations.
>
> If we accept that logic (and assuming we had some additional policy
> to prevent p2p relay spam due to replacement txs), we could make
> the mempool accept policy for replacements just be (something like)
> "[package] feerate is greater than max(descendent fee rate)", which
> seems like it'd be pretty straightforward to deal with in general?
>
>
>
> Thinking about it a little more; I think the decision as to whether
> you want to have a "100kvB at 10sat/vb" tx or a conflicting "1kvB at
> 100sat/vb" tx in your mempool if you're going to take into account
> unrelated, lower fee rate txs that are also in the mempool makes block
> building "more" of an NP-hard problem and makes the greedy solution
> we've currently got much more suboptimal -- if you really want to do that
> optimally, I think you have to have a mempool that retains conflicting
> txs and runs a dynamic programming solution to pick the best set, rather
> than today's simple greedy algorithms both for building the block and
> populating the mempool?
>
> For example, if you had two such replacements come through the network,
> a miner could want to flip from initially accepting the first replacement,
> to unaccepting it:
>
> Initial mempool: two big txs at 100k each, many small transactions at
> 15s/vB and 1s/vB
>
> [100kvB at 20s/vB] [850kvB at 15s/vB] [100kvB at 12s/vB] [1000kvB at
> 1s/vB]
> -> 0.148 BTC for 1MvB (100*20 + 850*15 + 50*1)
>
> Replacement for the 20s/vB tx paying a higher fee rate but lower total
> fee; that's worth including:
>
> [10kvB at 100s/vB] [850kvB at 15s/vB] [100kvB at 12s/vB] [1000kvB at 1s/vB]
> -> 0.1499 BTC for 1MvB (10*100 + 850*15 + 100*12 + 40*1)
>
> Later, replacement for the 12s/vB tx comes in, also paying higher fee
> rate but lower total fee. Worth including, but only if you revert the
> original replacement:
>
> [100kvB at 20s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [1000kvB at 1s/vB]
> -> 0.1575 BTC for 1MvB (150*20 + 850*15)
>
> [10kvB at 100s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [1000kvB at 1s/vB]
> -> 0.1484 BTC for 1MvB (10*100 + 50*20 + 850*15 + 90*1)
>
> Algorithms/mempool policies you might have, and their results with
> this example:
>
> * current RBF rules: reject both replacements because they don't
> increase the absolute fee, thus get the minimum block fees of
> 0.148 BTC
>
> * reject RBF unless it increases the fee rate, and get 0.1484 BTC in
> fees
>
> * reject RBF if it's lower fee rate or immediately decreases the block
> reward: so, accept the first replacement, but reject the second,
> getting 0.1499 BTC
>
> * only discard a conflicting tx when it pays both a lower fee rate and
> lower absolute fees, and choose amongst conflicting txs optimally
> via some complicated tx allocation algorithm when generating a block,
> and get 0.1575 BTC
>
> In this example, those techniques give 93.97%, 94.22%, 95.17% and 100% of
> total possible fees you could collect; and 99.852%, 99.858%, 99.881% and
> 100% of the total possible block reward at 6.25BTC/block.
>
> Is there a plausible example where the difference isn't that marginal?
> Seems like the simplest solution of just checking the (package/descendent)
> fee rate increases works well enough here at least.
>
> If 90kvB of unrelated txs at 14s/vB were then added to the mempool, then
> replacing both txs becomes (just barely) optimal, meaning the smartest
> possible algorithm and the dumbest one of just considering the fee rate
> produce the same result, while the others are worse:
>
> [10kvB at 100s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB]
> -> 0.1601 BTC for 1MvB
> (accepting both)
>
> [100kvB at 20s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB]
> -> 0.1575 BTC for 1MvB
> (accepting only the second replacement)
>
> [10kvB at 100s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB] [100kvB at 12s/vB]
> -> 0.1551 BTC for 1MvB
> (first replacement only, optimal tx selection: 10*100, 850*15, 50*14,
> 100*12)
>
> [100kvB at 20s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB] [100kvB at 12s/vB]
> -> 0.1545 BTC for 1MvB
> (accepting neither replacement)
>
> [10kvB at 100s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB] [100kvB at 12s/vB]
> -> 0.1506 BTC for 1MvB
> (first replacement only, greedy tx selection: 10*100, 850*15, 90*14,
> 50*1)
>
> Always accepting (package/descendent) fee rate increases removes the
> possibility of pinning entirely, I think -- you still have the problem
> where someone else might get a conflicting transaction confirmed first,
> but they can't get a conflicting tx stuck in the mempool without
> confirming if you're willing to pay enough to get it confirmed.
>
>
>
> Note that if we did have this policy, you could abuse it to cheaply drain
> people's mempools: if there was a 300MB backlog, you could publish 2980
> 100kB txs paying a fee rate just below the next block fee, meaning you'd
> kick out the previous backlog and your transactions take up all but the
> top 2MB of the mempool; if you then replace them all with perhaps 2980
> 100B txs paying a slightly higher fee rate, the default mempool will be
> left with only 2.3MB, at an ultimate cost to you of only about 30% of a
> block in fees, and you could then fill the mempool back up by spamming
> 300MB of ultra low fee rate txs.
>
> I think spam prevention at the outbound relay level isn't enough to
> prevent that: an attacker could contact every public node and relay the
> txs directly, clearing out the mempool of most public nodes directly. So
> you'd want some sort of spam prevention on inbound txs too?
>
> So I think you'd need to carefully think about relay spam before making
> this sort of change. Also, if we had tx rebroadcast implemented then
> having just a few nodes with large mempools might allow the network to
> recover from this situation automatically.
>
> Cheers,
> aj
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 18173 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-17 14:32 ` Anthony Towns
@ 2022-02-17 18:18 ` James O'Beirne
2022-02-18 9:01 ` darosior
2022-02-18 0:35 ` Antoine Riard
1 sibling, 1 reply; 25+ messages in thread
From: James O'Beirne @ 2022-02-17 18:18 UTC (permalink / raw)
To: Anthony Towns; +Cc: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 2035 bytes --]
> Is it really true that miners do/should care about that?
De facto, any miner running an unmodified version of bitcoind doesn't
care about anything aside from ancestor fee rate, given that the
BlockAssembler as-written orders transactions for inclusion by
descending ancestor fee-rate and then greedily adds them to the block
template. [0]
If anyone has any indication that there are miners running forks of
bitcoind that change this behavior, I'd be curious to know it.
Along the lines of what AJ wrote, optimal transaction selection is
NP-hard (knapsack problem). Any time that a miner spends deciding how
to assemble the next block is time not spent grinding on the nonce, and
so I'm skeptical that miners in practice are currently doing anything
that isn't fast and simple like the default implementation: sorting
fee-rate in descending order and then greedily packing.
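(For intuition, a heavily simplified Python sketch of that selection loop; the real C++ BlockAssembler also updates ancestor packages as transactions are included and tracks weight/sigop limits, so the field names here are only illustrative, not a faithful port:)

    def assemble_template(mempool_entries, max_weight=4_000_000):
        block, weight = [], 0
        # highest ancestor fee rate first, then greedily pack
        for e in sorted(mempool_entries,
                        key=lambda e: e.ancestor_fees / e.ancestor_size,
                        reverse=True):
            if weight + e.weight <= max_weight:
                block.append(e)
                weight += e.weight
        return block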
But it would be interesting to hear evidence to the contrary.
---
You can make the argument that transaction selection is just a function
of mempool contents, and so mempool maintenance criteria might be the
thing to look at. Mempool acceptance is gated based on a minimum
feerate[1]. Mempool eviction (when running low on space) happens on
the basis of max(self_feerate, descendant_feerate) [2]. So even in the
mempool we're still talking in terms of fee rates, not absolute fees.
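(Again just a sketch of the eviction criterion as described here, not the actual Core code, which works on a descendant-score index and bumps the mempool min fee afterwards:)

    def evict_one(entries):
        def score(e):
            return max(e.fee / e.vsize, e.descendant_fees / e.descendant_vsize)
        victim = min(entries, key=score)   # lowest score is dropped first
        entries.remove(victim)
        return victim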
That presents us with the "is/ought" problem: just because the mempool
*is* currently gating only on fee rate doesn't mean that's optimal. But
if the whole point of the mempool is to hold transactions that will be
mined, and if there's good reason that txns are chosen for mining based
on fee rate (it's quick and good enough), then it seems like fee rate
is the approximation that should ultimately prevail for txn
replacement.
[0]:
https://github.com/bitcoin/bitcoin/blob/master/src/node/miner.cpp#L310-L320
[1]:
https://github.com/bitcoin/bitcoin/blob/master/src/txmempool.cpp#L1106
[2]:
https://github.com/bitcoin/bitcoin/blob/master/src/txmempool.cpp#L1138-L1144
[-- Attachment #2: Type: text/html, Size: 2508 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-11 0:12 ` Matt Corallo
2022-02-14 19:51 ` James O'Beirne
@ 2022-02-17 14:32 ` Anthony Towns
2022-02-17 18:18 ` James O'Beirne
2022-02-18 0:35 ` Antoine Riard
1 sibling, 2 replies; 25+ messages in thread
From: Anthony Towns @ 2022-02-17 14:32 UTC (permalink / raw)
To: Matt Corallo, Bitcoin Protocol Discussion, darosior
On Thu, Feb 10, 2022 at 07:12:16PM -0500, Matt Corallo via bitcoin-dev wrote:
> This is where *all* the complexity comes from. If our goal is to "ensure a
> bump increases a miner's overall revenue" (thus not wasting relay for
> everyone else), then we precisely *do* need
> > Special consideration for "what should be in the next
> > block" and/or the caching of block templates seems like an imposing
> > dependency
> Whether a transaction increases a miner's revenue depends precisely on
> whether the transaction (package) being replaced is in the next block - if
> it is, you care about the absolute fee of the package and its replacement.
On Thu, Feb 10, 2022 at 11:44:38PM +0000, darosior via bitcoin-dev wrote:
> It's not that simple. As a miner, if I have less than 1MvB of transactions in my mempool, I don't want a 10sats/vb transaction paying 100000sats to be replaced by a 100sats/vb transaction paying only 10000sats.
Is it really true that miners do/should care about that?
If you did this particular example, the miner would be losing 90k sats
in fees, which would be at most 1.44 *hundredths* of a percent of the
block reward with the subsidy at 6.25BTC per block, even if there were
no other transactions in the mempool. Even cumulatively, 10sats/vb over
1MB versus 100sats/vb over 10kB is only a 1.44% loss of block revenue.
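(Back-of-the-envelope check of those two figures, assuming only the 6.25 BTC subsidy and no other transactions competing for the space:)

    subsidy_sat = int(6.25e8)

    single_loss = 100_000 - 10_000              # sats lost on the one replacement
    print(100 * single_loss / subsidy_sat)      # ~0.0144% of the block reward

    cumulative = 10 * 1_000_000 - 100 * 10_000  # 1MvB at 10s/vB vs 10kvB at 100s/vB
    print(100 * cumulative / subsidy_sat)       # ~1.44%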
I suspect the "economically rational" choice would be to happily trade
off that immediate loss against even a small chance of a simpler policy
encouraging higher adoption of bitcoin, _or_ a small chance of more
on-chain activity due to higher adoption of bitcoin protocols like
lightning and thus a lower chance of an empty mempool in future.
If the network has an "empty mempool" (say less than 2MvB-10MvB of
backlog even if you have access to every valid 1+ sat/vB tx on any node
connected to the network), then I don't think you'll generally have txs
with fee rates greater than ~20 sat/vB (ie 20x the minimum fee rate),
which means your maximum loss is about 3% of block revenue, at least
while the block subsidy remains at 6.25BTC/block.
Certainly those percentages can be expected to double every four years as
the block reward halves (assuming we don't also reduce the min relay fee
and block min tx fee), but I think for both miners and network stability,
it'd be better to have the mempool backlog increase over time, which
would both mean there's no/less need to worry about the special case of
the mempool being empty, and give a better incentive for people to pay
higher fees for quicker confirmations.
If we accept that logic (and assuming we had some additional policy
to prevent p2p relay spam due to replacement txs), we could make
the mempool accept policy for replacements just be (something like)
"[package] feerate is greater than max(descendent fee rate)", which
seems like it'd be pretty straightforward to deal with in general?
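(Expressed as a sketch, with illustrative field names only:)

    def accept_replacement(pkg_fee, pkg_vsize, conflicting_pkgs):
        # accept iff the replacement package's feerate beats every
        # conflicting package's descendant feerate
        pkg_rate = pkg_fee / pkg_vsize
        return all(pkg_rate > c.descendant_fees / c.descendant_vsize
                   for c in conflicting_pkgs)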
Thinking about it a little more; I think the decision as to whether
you want to have a "100kvB at 10sat/vb" tx or a conflicting "1kvB at
100sat/vb" tx in your mempool if you're going to take into account
unrelated, lower fee rate txs that are also in the mempool makes block
building "more" of an NP-hard problem and makes the greedy solution
we've currently got much more suboptimal -- if you really want to do that
optimally, I think you have to have a mempool that retains conflicting
txs and runs a dynamic programming solution to pick the best set, rather
than today's simple greedy algorithms both for building the block and
populating the mempool?
For example, if you had two such replacements come through the network,
a miner could want to flip from initially accepting the first replacement,
to unaccepting it:
Initial mempool: two big txs at 100k each, many small transactions at
15s/vB and 1s/vB
[100kvB at 20s/vB] [850kvB at 15s/vB] [100kvB at 12s/vB] [1000kvB at 1s/vB]
-> 0.148 BTC for 1MvB (100*20 + 850*15 + 50*1)
Replacement for the 20s/vB tx paying a higher fee rate but lower total
fee; that's worth including:
[10kvB at 100s/vB] [850kvB at 15s/vB] [100kvB at 12s/vB] [1000kvB at 1s/vB]
-> 0.1499 BTC for 1MvB (10*100 + 850*15 + 100*12 + 40*1)
Later, replacement for the 12s/vB tx comes in, also paying higher fee
rate but lower total fee. Worth including, but only if you revert the
original replacement:
[100kvB at 20s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [1000kvB at 1s/vB]
-> 0.1575 BTC for 1MvB (150*20 + 850*15)
[10kvB at 100s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [1000kvB at 1s/vB]
-> 0.1484 BTC for 1MvB (10*100 + 50*20 + 850*15 + 90*1)
Algorithms/mempool policies you might have, and their results with
this example:
* current RBF rules: reject both replacements because they don't
increase the absolute fee, thus get the minimum block fees of
0.148 BTC
* reject RBF unless it increases the fee rate, and get 0.1484 BTC in
fees
* reject RBF if it's lower fee rate or immediately decreases the block
reward: so, accept the first replacement, but reject the second,
getting 0.1499 BTC
* only discard a conflicting tx when it pays both a lower fee rate and
lower absolute fees, and choose amongst conflicting txs optimally
via some complicated tx allocation algorithm when generating a block,
and get 0.1575 BTC
In this example, those techniques give 93.97%, 94.22%, 95.17% and 100% of
total possible fees you could collect; and 99.852%, 99.858%, 99.881% and
100% of the total possible block reward at 6.25BTC/block.
Is there a plausible example where the difference isn't that marginal?
Seems like the simplest solution of just checking the (package/descendent)
fee rate increases works well enough here at least.
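(A small Python sketch that reproduces the fee figures above; the "divisible" flag stands in for the aggregates of many small transactions, which can be partially included, while the individual large txs are all-or-nothing:)

    def greedy_block_fees_btc(mempool, block_kvb=1000):
        remaining, fees_ksat = block_kvb, 0
        for size, rate, divisible in sorted(mempool, key=lambda t: -t[1]):
            take = min(size, remaining) if divisible else (size if size <= remaining else 0)
            fees_ksat += take * rate
            remaining -= take
        return fees_ksat * 1000 / 1e8   # kvB * sat/vB -> sat -> BTC

    cases = {
        "no replacement":            [(100, 20, 0), (850, 15, 1), (100, 12, 0), (1000, 1, 1)],
        "first replacement only":    [(10, 100, 0), (850, 15, 1), (100, 12, 0), (1000, 1, 1)],
        "both replacements":         [(10, 100, 0), (50, 20, 0), (850, 15, 1), (1000, 1, 1)],
        "revert first, keep second": [(100, 20, 0), (50, 20, 0), (850, 15, 1), (1000, 1, 1)],
    }
    for name, pool in cases.items():
        print(name, greedy_block_fees_btc(pool))
    # -> 0.148, 0.1499, 0.1484 and 0.1575 BTC respectively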
If 90kvB of unrelated txs at 14s/vB were then added to the mempool, then
replacing both txs becomes (just barely) optimal, meaning the smartest
possible algorithm and the dumbest one of just considering the fee rate
produce the same result, while the others are worse:
[10kvB at 100s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB]
-> 0.1601 BTC for 1MvB
(accepting both)
[100kvB at 20s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB]
-> 0.1575 BTC for 1MvB
(accepting only the second replacement)
[10kvB at 100s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB] [100kvB at 12s/vB]
-> 0.1551 BTC for 1MvB
(first replacement only, optimal tx selection: 10*100, 850*15, 50*14, 100*12)
[100kvB at 20s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB] [100kvB at 12s/vB]
-> 0.1545 BTC for 1MvB
(accepting neither replacement)
[10kvB at 100s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB] [100kvB at 12s/vB]
-> 0.1506 BTC for 1MvB
(first replacement only, greedy tx selection: 10*100, 850*15, 90*14, 50*1)
Always accepting (package/descendent) fee rate increases removes the
possibility of pinning entirely, I think -- you still have the problem
where someone else might get a conflicting transaction confirmed first,
but they can't get a conflicting tx stuck in the mempool without
confirming if you're willing to pay enough to get it confirmed.
Note that if we did have this policy, you could abuse it to cheaply drain
people's mempools: if there was a 300MB backlog, you could publish 2980
100kB txs paying a fee rate just below the next block fee, meaning you'd
kick out the previous backlog and your transactions take up all but the
top 2MB of the mempool; if you then replace them all with perhaps 2980
100B txs paying a slightly higher fee rate, the default mempool will be
left with only 2.3MB, at an ultimate cost to you of only about 30% of a
block in fees, and you could then fill the mempool back up by spamming
300MB of ultra low fee rate txs.
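(Rough cost arithmetic for that trick; the 20 sat/vB figure is just an example of "near the next block fee", and the ratio doesn't depend on it:)

    n_txs, small_tx_vb = 2980, 100
    near_block_feerate = 20                       # sat/vB, illustrative
    final_cost = n_txs * small_tx_vb * near_block_feerate
    one_block_of_fees = 1_000_000 * near_block_feerate
    print(final_cost / one_block_of_fees)         # ~0.3 of a block's worth of fees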
I think spam prevention at the outbound relay level isn't enough to
prevent that: an attacker could contact every public node and relay the
txs directly, clearing out the mempool of most public nodes directly. So
you'd want some sort of spam prevention on inbound txs too?
So I think you'd need to carefully think about relay spam before making
this sort of change. Also, if we had tx rebroadcast implemented then
having just a few nodes with large mempools might allow the network to
recover from this situation automatically.
Cheers,
aj
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-16 19:18 ` James O'Beirne
@ 2022-02-16 20:36 ` Billy Tetrud
0 siblings, 0 replies; 25+ messages in thread
From: Billy Tetrud @ 2022-02-16 20:36 UTC (permalink / raw)
To: James O'Beirne; +Cc: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 2279 bytes --]
> the validity of a sponsor txn is "monotonically" true at any point after
the inclusion of the sponsored txn in a block.
Oh I see his point now. If sponsors were valid at any point in the future,
not only would a utxo index be needed but an index of all transactions.
Yeah, that wouldn't be good. And the solution of bounding the sponsor
transaction to be valid in some window after the transaction is included
doesn't solve the original point of making sponsor transactions never
become invalid. Thanks for the clarification James, and good point Jeremy.
On Wed, Feb 16, 2022 at 1:19 PM James O'Beirne <james.obeirne@gmail.com>
wrote:
> > What do you mean by monotone in the context of sponsor transactions?
>
> I take this to mean that the validity of a sponsor txn is
> "monotonically" true at any point after the inclusion of the sponsored
> txn in a block.
>
> > And when you say tx-index, do you mean an index for looking up a
> > transaction by its ID? Is that not already something nodes do?
>
> Indeed, not all nodes have this ability. Each bitcoind node has a map
> of unspent coins which can be referenced by outpoint, i.e. (txid, index),
> but the same isn't true for all historical transactions. I
> (embarrassingly) forgot this in the prior post.
>
> The map of (txid -> transaction) for all time is a separate index that
> must be enabled via the `-txindex=1` flag; it isn't enabled by default
> because it isn't required for consensus and its growth is unbounded.
>
> > > The current consensus threshold for transactions to become invalid
> > > is a 100 block reorg
> >
> > What do you mean by this? The only 100 block period I'm aware of is
> > the coinbase cooldown period.
>
> If there were a reorg deeper than 100 blocks, it would permanently
> invalidate any transactions spending the recently-matured coinbase
> subsidy in any block between $new_reorg_tip and ($former_tip_height -
> 100). These invalidated spends would not be able to be reorganized
> into a new replacement chain.
>
> How this differs in practice or principle from a "regular" double-spend
> via reorg I'll leave for another message. I'm not sure that I understand
> that myself. Personally I think if we hit a >100 block reorg, we've got
> bigger issues than coinbase invalidation.
>
[-- Attachment #2: Type: text/html, Size: 2790 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-16 2:54 ` Billy Tetrud
@ 2022-02-16 19:18 ` James O'Beirne
2022-02-16 20:36 ` Billy Tetrud
0 siblings, 1 reply; 25+ messages in thread
From: James O'Beirne @ 2022-02-16 19:18 UTC (permalink / raw)
To: Billy Tetrud, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 1575 bytes --]
> What do you mean by monotone in the context of sponsor transactions?
I take this to mean that the validity of a sponsor txn is
"monotonically" true at any point after the inclusion of the sponsored
txn in a block.
> And when you say tx-index, do you mean an index for looking up a
> transaction by its ID? Is that not already something nodes do?
Indeed, not all nodes have this ability. Each bitcoind node has a map
of unspent coins which can be referenced by outpoint, i.e. (txid, index),
but the same isn't true for all historical transactions. I
(embarrassingly) forgot this in the prior post.
The map of (txid -> transaction) for all time is a separate index that
must be enabled via the `-txindex=1` flag; it isn't enabled by default
because it isn't required for consensus and its growth is unbounded.
> > The current consensus threshold for transactions to become invalid
> > is a 100 block reorg
>
> What do you mean by this? The only 100 block period I'm aware of is
> the coinbase cooldown period.
If there were a reorg deeper than 100 blocks, it would permanently
invalidate any transactions spending the recently-matured coinbase
subsidy in any block between $new_reorg_tip and ($former_tip_height -
100). These invalidated spends would not be able to be reorganized
into a new replacement chain.
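(In code terms, the maturity rule being relied on here is roughly the following, simplified from the consensus check:)

    COINBASE_MATURITY = 100

    def coinbase_spend_allowed(coinbase_height, spend_height):
        # a coinbase output only becomes spendable after 100 confirmations,
        # so only a reorg deeper than 100 blocks can invalidate such a spend
        return spend_height - coinbase_height >= COINBASE_MATURITY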
How this differs in practice or principle from a "regular" double-spend
via reorg I'll leave for another message. I'm not sure that I understand
that myself. Personally I think if we hit a >100 block reorg, we've got
bigger issues than coinbase invalidation.
[-- Attachment #2: Type: text/html, Size: 1810 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-15 21:38 ` Jeremy Rubin
@ 2022-02-16 2:54 ` Billy Tetrud
2022-02-16 19:18 ` James O'Beirne
0 siblings, 1 reply; 25+ messages in thread
From: Billy Tetrud @ 2022-02-16 2:54 UTC (permalink / raw)
To: Jeremy Rubin, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 5940 bytes --]
@Jeremy
> there are technical reasons for sponsors to not be monotone. Mostly that
it requires the maintenance of an additional permanent TX-Index, making
Bitcoin's state grow at a much worse rate
What do you mean by monotone in the context of sponsor transactions? And when
you say tx-index, do you mean an index for looking up a transaction by its
ID? Is that not already something nodes do?
> The sponsors proposal is a change from Epsilon-Strong Reorgability to
Epsilon-Weak Reorgability
It doesn't look like you defined that term in your list. Did you mean what
you listed as "Epsilon: Simple Existential Reorgability"? If so, I would
say that should be sufficient. I'm not sure I would even distinguish
between the "strong" and "simple" versions of these things, tho you could
talk about things that make reorgs more or less computationally difficult
on a spectrum. As long as the computational difficulty isn't significant
for miners vs their other computational costs, the computation isn't really
a problem.
@Russell
> The current consensus threshold for transactions to become invalid is a
100 block reorg
What do you mean by this? The only 100 block period I'm aware of is the
coinbase cooldown period.
> I promise to personally build a wallet that always creates transactions
on the verge of becoming invalid should anyone ever implement a feature
that violates this tx validity principle.
Could you explain how you would build a wallet like that with a sponsor
transaction as described by Jeremy? What damage do you think such a wallet
could do? As far as I can tell, such a wallet is very unlikely to do more
damage to the network than it does to the user of that wallet.
On Tue, Feb 15, 2022 at 3:39 PM Jeremy Rubin via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> The difference between sponsors and this issue is more subtle. The issue
> Suhas raised was with a variant of sponsors trying to address a second
> criticism, not sponsors itself, which is secure against this.
>
> I think I can make this clear by defining a few different properties:
>
> Strong Reorgability: The transaction graph can be arbitrarily reorged into
> any series of blocks as long as dependency order/timelocks are respected.
> Simple Existential Reorgability: The transaction graph can be reorged into
> a different series of blocks, and it is not computationally difficult to
> find such an ordering.
> Epsilon-Strong Reorgability: The transaction graph can be arbitrarily
> reorged into any series of blocks as long as dependency order/timelocks are
> respected, up to Epsilon blocks.
> Epsilon: Simple Existential Reorgability: The transaction graph can be
> reorged into a different series of blocks, and it is not computationally
> difficult to find such an ordering, up to epsilon blocks.
> Perfect Reorgability: The transaction graph can be reorged into a
> different series of blocks, but the transactions themselves are already
> locked in.
>
> Perfect Reorgability doesn't exist in Bitcoin because unconfirmed
> transactions can be double spent which invalidates descendants. Notably,
> for a subset of the graph which is CTV Congestion control tree expansions,
> perfect reorgability would exist, so it's not just a bullshit concept to
> think about :)
>
> The sponsors proposal is a change from Epsilon-Strong Reorgability to
> Epsilon-Weak Reorgability. It's not clear to me that there is any
> functional reason to rely on Strongness when Bitcoin's reorgability is
> already not Perfect, so a reorg generator with malicious intent can already
> disturb the tx graph. Epsilon-Weak Reorgability seems to be a sufficient
> property.
>
> Do you disagree with that?
>
> Best,
>
> Jeremy
>
> --
> @JeremyRubin <https://twitter.com/JeremyRubin>
>
> On Tue, Feb 15, 2022 at 12:25 PM Russell O'Connor via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>>
>>
>>> >> 2. (from Suhas) "once a valid transaction is created, it should not
>>> become invalid later on unless the inputs are double-spent."
>>> > This doesn't seem like a huge concern to me
>>>
>>> I agree that this shouldn't be a concern. In fact, I've asked numerous
>>> people in numerous places what practical downside there is to transactions
>>> that become invalid, and I've heard basically radio silence other than one
>>> off hand remark by satoshi at the dawn of time which didn't seem to me to
>>> have good reasoning. I haven't seen any downside whatsoever of transactions
>>> that can become invalid for anyone waiting the standard 6 confirmations -
>>> the reorg risks only exists for people not waiting for standard
>>> finalization. So I don't think we should consider that aspect of a
>>> sponsorship transaction that can only be mined with the transaction it
>>> sponsors to be a problem unless a specific practical problem case can be
>>> identified. Even if a significant such case was identified, an easy
>>> solution would be to simply allow sponsorship transactions to be mined on
>>> or after the sponsored transaction is mined.
>>>
>>
>> The downside is that in a 6 block reorg any transaction that is moved
>> past its expiration date becomes invalid and all its descendants become
>> invalid too.
>>
>> The current consensus threshold for transactions to become invalid is a
>> 100 block reorg, and I see no reason to change this threshold. I promise
>> to personally build a wallet that always creates transactions on the verge
>> of becoming invalid should anyone ever implement a feature that violates
>> this tx validity principle.
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 9090 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-15 20:24 ` Russell O'Connor
2022-02-15 20:53 ` James O'Beirne
@ 2022-02-15 21:38 ` Jeremy Rubin
2022-02-16 2:54 ` Billy Tetrud
1 sibling, 1 reply; 25+ messages in thread
From: Jeremy Rubin @ 2022-02-15 21:38 UTC (permalink / raw)
To: Russell O'Connor, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 3832 bytes --]
The difference between sponsors and this issue is more subtle. The issue
Suhas raised was with a variant of sponsors trying to address a second
criticism, not sponsors itself, which is secure against this.
I think I can make this clear by defining a few different properties:
Strong Reorgability: The transaction graph can be arbitrarily reorged into
any series of blocks as long as dependency order/timelocks are respected.
Simple Existential Reorgability: The transaction graph can be reorged into
a different series of blocks, and it is not computationally difficult to
find such an ordering.
Epsilon-Strong Reorgability: The transaction graph can be arbitrarily
reorged into any series of blocks as long as dependency order/timelocks are
respected, up to Epsilon blocks.
Epsilon: Simple Existential Reorgability: The transaction graph can be
reorged into a different series of blocks, and it is not computationally
difficult to find such an ordering, up to epsilon blocks.
Perfect Reorgability: The transaction graph can be reorged into a different
series of blocks, but the transactions themselves are already locked in.
Perfect Reorgability doesn't exist in Bitcoin because unconfirmed
transactions can be double spent which invalidates descendants. Notably,
for a subset of the graph which is CTV Congestion control tree expansions,
perfect reorgability would exist, so it's not just a bullshit concept to
think about :)
The sponsors proposal is a change from Epsilon-Strong Reorgability to
Epsilon-Weak Reorgability. It's not clear to me that there is any
functional reason to rely on Strongness when Bitcoin's reorgability is
already not Perfect, so a reorg generator with malicious intent can already
disturb the tx graph. Epsilon-Weak Reorgability seems to be a sufficient
property.
Do you disagree with that?
Best,
Jeremy
--
@JeremyRubin <https://twitter.com/JeremyRubin>
On Tue, Feb 15, 2022 at 12:25 PM Russell O'Connor via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>
>> >> 2. (from Suhas) "once a valid transaction is created, it should not
>> become invalid later on unless the inputs are double-spent."
>> > This doesn't seem like a huge concern to me
>>
>> I agree that this shouldn't be a concern. In fact, I've asked numerous
>> people in numerous places what practical downside there is to transactions
>> that become invalid, and I've heard basically radio silence other than one
>> off hand remark by satoshi at the dawn of time which didn't seem to me to
>> have good reasoning. I haven't seen any downside whatsoever of transactions
>> that can become invalid for anyone waiting the standard 6 confirmations -
>> the reorg risks only exists for people not waiting for standard
>> finalization. So I don't think we should consider that aspect of a
>> sponsorship transaction that can only be mined with the transaction it
>> sponsors to be a problem unless a specific practical problem case can be
>> identified. Even if a significant such case was identified, an easy
>> solution would be to simply allow sponsorship transactions to be mined on
>> or after the sponsored transaction is mined.
>>
>
> The downside is that in a 6 block reorg any transaction that is moved past
> its expiration date becomes invalid and all its descendants become invalid
> too.
>
> The current consensus threshold for transactions to become invalid is a
> 100 block reorg, and I see no reason to change this threshold. I promise
> to personally build a wallet that always creates transactions on the verge
> of becoming invalid should anyone ever implement a feature that violates
> this tx validity principle.
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 5663 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-15 20:53 ` James O'Beirne
@ 2022-02-15 21:37 ` Jeremy Rubin
0 siblings, 0 replies; 25+ messages in thread
From: Jeremy Rubin @ 2022-02-15 21:37 UTC (permalink / raw)
To: James O'Beirne, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 4009 bytes --]
James,
Unfortunately, there are technical reasons for sponsors to not be monotone.
Mostly that it requires the maintenance of an additional permanent
TX-Index, making Bitcoin's state grow at a much worse rate. Instead, you
could introduce a time-bound for inclusion, e.g. 100 blocks. However, this
time-bounded version has the issue that Roconnor raised which is that
validity "stops" after a certain time, hurting reorganization.
However, If you wanted to map this conceptually onto existing tx indexes,
you could have an output with exactly the script `<100 blocks> OP_CSV` and
then allow sponsor references to be pruned after that output is "garbage
collected" by pruning it out of a block. This would be a way that
sponsorship would be opt-in (must have the flag output) and then sponsors
observations of txid existence would be only guaranteed to work for 100
blocks after which it could be garbage collected by a miner.
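(A loose Python sketch of that validity window, purely to illustrate the opt-in/garbage-collection idea; none of these names correspond to anything real:)

    SPONSOR_WINDOW = 100   # blocks, matching the <100 blocks> OP_CSV output above

    def sponsor_reference_usable(sponsored_tx_height, tip_height):
        # the sponsored txid is only guaranteed to be resolvable while the
        # flag output is still unprunable, i.e. within the CSV window
        return 0 <= tip_height - sponsored_tx_height <= SPONSOR_WINDOW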
It's not a huge leap to say that this behavior should be made entirely
"virtual", as you are essentially arguing that there exists a transaction
graph we could construct that would be equivalent to the graph were we to
actually have such an output / spends relationship. Since the property we
care about is about all graphs, that a specific one could exist that has
the same dependency / invalidity relationships during a reorg is important
for the theory of bitcoin transaction execution.
So it really isn't clear to me that we're hurting the transaction graph
properties that severely with changes in this family. It's also not clear
to me that having a TXINDEX is a huge issue given that making a dust-out
per tx would have the same impact (and people might do it if it's
functionally useful, so just making it default behavior would at least help
us optimize it to be done through e.g. a separate witness space/utreexo-y
thing).
Another consideration is to make the outputs from sponsor txn subject to a
100 block cool-off period. E.g., so even if you have your inverse timelock,
adding a constraint that all outputs then have something similar to
fCoinbase set on them (for spending timelocks only) would mean that little
reorgs could not disturb the tx graph, although this poses a UX challenge
for wallets that aim to bump often (e.g., 1 bump per block would mean you
need to maintain 100 outputs).
Lastly, it's pretty clear from a UX perspective that I should not want to
pay miners who did *not* mine my transactions! Therefore, if you pay a high
enough fee, it would be natural for users to want to cancel their (now very
desirable) stale fee bumps by replacing them with something more useful to
them. So allowing sponsors to appear in subsequent blocks might make it
rational for users to do more transactions, which increases the costs of
such an approach.
All things considered, I favor the simple version of just having sponsors
only valid for the block their target is co-resident in.
Jeremy
--
@JeremyRubin <https://twitter.com/JeremyRubin>
On Tue, Feb 15, 2022 at 12:53 PM James O'Beirne via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> > The downside is that in a 6 block reorg any transaction that is moved
> > past its expiration date becomes invalid and all its descendants
> > become invalid too.
>
> Worth noting that the transaction sponsors design is no worse an
> offender on this count than, say, CPFP is, provided we adopt the change
> that sponsored txids are required to be included in the current block
> *or* prior blocks. (The original proposal allowed current block only).
>
> In other words, the sponsored txids are just "virtual inputs" to the
> sponsor transaction.
>
> This is a much different case than e.g. transaction expiry based on
> wall-clock time or block height, which I agree complicates reorgs
> significantly.
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 7485 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-15 20:24 ` Russell O'Connor
@ 2022-02-15 20:53 ` James O'Beirne
2022-02-15 21:37 ` Jeremy Rubin
2022-02-15 21:38 ` Jeremy Rubin
1 sibling, 1 reply; 25+ messages in thread
From: James O'Beirne @ 2022-02-15 20:53 UTC (permalink / raw)
To: Russell O'Connor, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 681 bytes --]
> The downside is that in a 6 block reorg any transaction that is moved
> past its expiration date becomes invalid and all its descendants
> become invalid too.
Worth noting that the transaction sponsors design is no worse an
offender on this count than, say, CPFP is, provided we adopt the change
that sponsored txids are required to be included in the current block
*or* prior blocks. (The original proposal allowed current block only).
In other words, the sponsored txids are just "virtual inputs" to the
sponsor transaction.
This is a much different case than e.g. transaction expiry based on
wall-clock time or block height, which I agree complicates reorgs
significantly.
[-- Attachment #2: Type: text/html, Size: 768 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-15 17:09 ` Billy Tetrud
@ 2022-02-15 20:24 ` Russell O'Connor
2022-02-15 20:53 ` James O'Beirne
2022-02-15 21:38 ` Jeremy Rubin
0 siblings, 2 replies; 25+ messages in thread
From: Russell O'Connor @ 2022-02-15 20:24 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 1600 bytes --]
> >> 2. (from Suhas) "once a valid transaction is created, it should not
> become invalid later on unless the inputs are double-spent."
> > This doesn't seem like a huge concern to me
>
> I agree that this shouldn't be a concern. In fact, I've asked numerous
> people in numerous places what practical downside there is to transactions
> that become invalid, and I've heard basically radio silence other than one
> off hand remark by satoshi at the dawn of time which didn't seem to me to
> have good reasoning. I haven't seen any downside whatsoever of transactions
> that can become invalid for anyone waiting the standard 6 confirmations -
> the reorg risks only exists for people not waiting for standard
> finalization. So I don't think we should consider that aspect of a
> sponsorship transaction that can only be mined with the transaction it
> sponsors to be a problem unless a specific practical problem case can be
> identified. Even if a significant such case was identified, an easy
> solution would be to simply allow sponsorship transactions to be mined on
> or after the sponsored transaction is mined.
>
The downside is that in a 6 block reorg any transaction that is moved past
its expiration date becomes invalid and all its descendants become invalid
too.
The current consensus threshold for transactions to become invalid is a 100
block reorg, and I see no reason to change this threshold. I promise to
personally build a wallet that always creates transactions on the verge of
becoming invalid should anyone ever implement a feature that violates this
tx validity principle.
[-- Attachment #2: Type: text/html, Size: 1933 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-15 0:43 ` Antoine Riard
@ 2022-02-15 17:09 ` Billy Tetrud
2022-02-15 20:24 ` Russell O'Connor
0 siblings, 1 reply; 25+ messages in thread
From: Billy Tetrud @ 2022-02-15 17:09 UTC (permalink / raw)
To: Antoine Riard, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 11217 bytes --]
> If you wish to fee-bump transaction X with sponsor, how can you be sure
that transaction isn't present in the majority of network nodes, and X has
_not_ been dropped since your last broadcast ?
You're right that you can't assume your target transaction hasn't been
dropped. However, I assume when James said "No rebroadcast (wasted
bandwidth) is required for the original txn data" he meant that in the
context of the "diff" he was talking about. It would be easy enough to
specify a sponsorship transaction that points to a transaction with a
specific id without *requiring* that transaction to be rebroadcast. If your
partner node has that transaction, no rebroadcast is necessary. If your
partner node doesn't have it, they can request it. That way a rebroadcast is
only done when necessary. Correct me if my understanding of your suggestion
is wrong, James.
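A minimal sketch of that relay behaviour, with hypothetical message and
helper names (this is not the actual sponsors proposal text):

    def on_sponsor_announcement(peer, sponsor_tx, mempool):
        # which sponsored transactions are we missing?
        missing = [txid for txid in sponsor_tx.sponsored_txids
                   if txid not in mempool]
        if missing:
            # only request the original txn data when we don't have it;
            # no blanket rebroadcast by the sender
            peer.send_getdata(missing)
            return "waiting-for-parents"
        return mempool.accept(sponsor_tx)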
>> 2. (from Suhas) "once a valid transaction is created, it should not
become invalid later on unless the inputs are double-spent."
> This doesn't seem like a huge concern to me
I agree that this shouldn't be a concern. In fact, I've asked numerous
people in numerous places what practical downside there is to transactions
that become invalid, and I've heard basically radio silence other than one
offhand remark by satoshi at the dawn of time, which didn't seem to me to
have good reasoning. I haven't seen any downside whatsoever of transactions
that can become invalid for anyone waiting for the standard 6 confirmations -
the reorg risk only exists for people not waiting for standard
finalization. So I don't think we should consider that aspect of a
sponsorship transaction that can only be mined with the transaction it
sponsors to be a problem unless a specific practical problem case can be
identified. Even if a significant such case was identified, an easy
solution would be to simply allow sponsorship transactions to be mined on
or after the sponsored transaction is mined.
On Mon, Feb 14, 2022 at 7:10 PM Antoine Riard via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> > In the context of fee bumping, I don't see how this is a criticism
> > unique to transaction sponsors, since it also applies to CPFP: if you
> > tried to bump fees for transaction A with child txn B, if some mempool
> > hasn't seen parent A, it will reject B.
>
> Agreed, it's a comment raising the shenanigans of tx-diff-only propagation,
> which afaict affect all fee-bumping primitives equally. It wasn't a criticism
> specific to transaction sponsors, as at that point of your post, sponsors
> are not introduced yet.
>
> > This still doesn't address the issue I'm talking about, which is if you
> > pre-commit to some "fee-bumping" key in your CPFP outputs and that key
> > ends up being compromised. This isn't a matter of data availability or
> > redundancy.
>
> I'm not sure about the real safety risk of the compromise of the anchor
> output key. Of course, if your anchor output key is compromised and the
> bumped package is already public/known, an attacker can extend your package
> with junk to neutralize your carve-out capability (I think). That said,
> this issue sounds solved to me with package relay, as you can always
> broadcast a new version of the package from the root UTXO, without
> attention to the carve-out limitation.
>
> (Side-note: I think we can slowly deprecate the carve-out once package
> relay is deployed, as the fee-bumping flexibility of the latter is a
> superset of the former).
>
> > As I mentioned in the reply to Matt's message, I'm not quite
> > understanding this idea of wanting to bump the fee for something
> > without knowing what it is; that doesn't make much sense to me.
> > The "bump fee" operation seems contingent on knowing
> > what you want to bump.
>
> From your post : "No rebroadcast (wasted bandwidth) is required for the
> original txn data."
>
> I'm objecting to that supposed benefit of a transaction sponsor. If you
> have transaction X and transaction Y spending the same UTXO, both of them
> can be defined as "the original txn data". If you wish to fee-bump
> transaction X with sponsor, how can you be sure that transaction
> Y isn't present in the majority of network nodes, and X has _not_ been
> dropped since your last broadcast ? Otherwise, iirc the sponsor design, your
> sponsor transaction is going to be rejected.
>
> I think you can't, and thus preventively you should broadcast the
> sponsoring/sponsored transactions as a (new type of) package.
>
> That said, I'm not sure if that issue affects vaults as much as
> payment channels. With vaults, the tree of transactions is known ahead,
> and there is no competition in the spends. Assuming the first broadcast has
> been efficient (and it could be a reasonable assumption thanks to mempool
> rebroadcast), the sponsor should propagate.
>
> So I think here for the sake of sponsor efficiency analysis, we might have
> to distinguish between the protocols with once-and-for-all transaction
> negotiation (vaults) and the ones with off-chain, dynamic re-negotiation
> (payment channels, factories) ?
>
> > I'm not familiar with the L2 dust-limit issues, and I do think that
> > "fixing" RBF behavior is *probably* worthwhile.
>
> Sadly, it sounds like "fixing" RBF behavior is a requirement to eradicate
> the most advanced pinnings... That fix is independent of the fee-bumping
> primitive considered.
>
> > Those issues aside, I
> > think the transaction sponsors idea may be closer to a silver bullet
> > than you're giving it credit for, because designing specifically for the
> > fee-management use case has some big benefits.
>
> I don't deny the scheme is interesting, though I would argue SIGHASH_GROUP
> is more efficient, while offering more flexibility. In any case, I think we
> should still pursue further the collections of problems and requirements
> (batching, key management, ...) that new fee-bumping primitives should aim
> to solve, before engaging more on the deployment of one of them [0].
>
> [0] In that sense see
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-May/019031.html
>
> Le lun. 14 févr. 2022 à 15:29, James O'Beirne <james.obeirne@gmail.com> a
> écrit :
>
>> Thanks for your thoughtful reply Antoine.
>>
>> > In a distributed system such as the Bitcoin p2p network, you might
>> > have transaction A and transaction B broadcast at the same time and
>> > your peer topology might fluctuate between original send and
>> > broadcast of the diff, you don't know who's seen what... You might
>> > inefficiently announce diff A on top of B and diff B on top A. We
>> > might leverage set reconciliation there a la Erlay, though likely
>> > with increased round-trips.
>>
>> In the context of fee bumping, I don't see how this is a criticism
>> unique to transaction sponsors, since it also applies to CPFP: if you
>> tried to bump fees for transaction A with child txn B, if some mempool
>> hasn't seen parent A, it will reject B.
>>
>> > Have you heard about SIGHASH_GROUP [0] ?
>>
>> I haven't - I'll spend some time reviewing this. Thanks.
>>
>> > > [me complaining CPFP requires lock-in to keys]
>> >
>> > It's true it requires pre-specifying the fee-bumping key. Though note
>> > the fee-bumping key can be fully separated from the
>> > "vaults"/"channels" set of main keys and hosted on replicated
>> > infrastructure such as watchtowers.
>>
>> This still doesn't address the issue I'm talking about, which is if you
>> pre-commit to some "fee-bumping" key in your CPFP outputs and that key
>> ends up being compromised. This isn't a matter of data availability or
>> redundancy.
>>
>> Note that this failure may be unique to vault use cases, when you're
>> pre-generating potentially large numbers of transactions or covenants
>> that cannot be altered after the fact. If you generate vault txns that
>> assume the use of some key for CPFP-based fee bumping and that key
>> winds up being compromised, that puts you in an uncomfortable
>> situation: you can no longer bump fees on unvaulting transactions,
>> rendering the vaults possibly unretrievable depending on the fee market.
>>
>> > As a L2 transaction issuer you can't be sure the transaction you wish
>> > to point to is already in the mempool, or has not been replaced by
>> > your counterparty spending the same shared-utxo, either competitively
>> > or maliciously. So as a measure of caution, you should broadcast
>> > sponsor + target transactions in the same package, thus cancelling
>> > the bandwidth saving (I think).
>>
>> As I mentioned in the reply to Matt's message, I'm not quite
>> understanding this idea of wanting to bump the fee for something
>> without knowing what it is; that doesn't make much sense to me.
>> The "bump fee" operation seems contingent on knowing
>> what you want to bump.
>>
>> And if you're, say, trying to broadcast a lightning channel close and
>> you know you need to bump the fee right away, before even broadcasting
>> it, either you're going to
>>
>> - reformulate the txn to bring up the fee rate (e.g. add inputs
>> with some yet-undeployed sighash) as you would have done with RBF, or
>>
>> - you'd have the same "package relay" problem with CPFP that you
>> would with transaction sponsors.
>>
>> So I don't understand the objection here.
>>
>> Also, I didn't mean to discourage existing work on package relay or
>> fixing RBF, which seem clearly important. Maybe I should have noted
>> that explicitly in the original message
>>
>> > I don't think a sponsor is a silver-bullet to solve all the
>> > L2-related mempool issues. It won't solve the most concerning pinning
>> > attacks, as I think the bottleneck is replace-by-fee. Neither does it
>> > solve the issues imposed on the L2s by the dust limit.
>>
>> I'm not familiar with the L2 dust-limit issues, and I do think that
>> "fixing" RBF behavior is *probably* worthwhile. Those issues aside, I
>> think the transaction sponsors idea may be closer to a silver bullet
>> than you're giving it credit for, because designing specifically for the
>> fee-management use case has some big benefits.
>>
>> For one, it makes migration easier. That is to say: there is none,
>> whereas there is existing RBF policy that needs consideration.
>>
>> But maybe more importantly, transaction sponsors' limited use case also
>> allows for specifying much more targeted "replacement" policy since
>> sponsors are special-purpose transactions that only exist to
>> dynamically bump feerate. E.g. my SIGHASH_{NONE,SINGLE}|ANYONECANPAY
>> proposal might make complete sense for the sponsors/fee-management use
>> case, and clarify the replacement problem, but obviously wouldn't work
>> for more general transaction replacement. In other words, RBF's
>> general nature might make it a much harder problem to solve well.
>>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 12677 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-14 20:28 ` James O'Beirne
@ 2022-02-15 0:43 ` Antoine Riard
2022-02-15 17:09 ` Billy Tetrud
0 siblings, 1 reply; 25+ messages in thread
From: Antoine Riard @ 2022-02-15 0:43 UTC (permalink / raw)
To: James O'Beirne; +Cc: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 8687 bytes --]
> In the context of fee bumping, I don't see how this is a criticism
> unique to transaction sponsors, since it also applies to CPFP: if you
> tried to bump fees for transaction A with child txn B, if some mempool
> hasn't seen parent A, it will reject B.
Agreed, it's a comment raising the shenanigans of tx-diff-only propagation,
which afaict affect all fee-bumping primitives equally. It wasn't a criticism
specific to transaction sponsors, as at that point of your post, sponsors
are not introduced yet.
> This still doesn't address the issue I'm talking about, which is if you
> pre-commit to some "fee-bumping" key in your CPFP outputs and that key
> ends up being compromised. This isn't a matter of data availability or
> redundancy.
I'm not sure about the real safety risk of the compromise of the anchor
output key. Of course, if your anchor output key is compromised and the
bumped package is already public/known, an attacker can extend your package
with junk to neutralize your carve-out capability (I think). That said,
this issue sounds solved to me with package relay, as you can always
broadcast a new version of the package from the root UTXO, without
attention to the carve-out limitation.
(Side-note: I think we can slowly deprecate the carve-out once package
relay is deployed, as the fee-bumping flexibility of the latter is a
superset of the former).
> As I mentioned in the reply to Matt's message, I'm not quite
> understanding this idea of wanting to bump the fee for something
> without knowing what it is; that doesn't make much sense to me.
> The "bump fee" operation seems contingent on knowing
> what you want to bump.
From your post : "No rebroadcast (wasted bandwidth) is required for the
original txn data."
I'm objecting to that supposed benefit of a transaction sponsor. If you
have transaction X and transaction Y spending the same UTXO, both of them
can be defined as "the original txn data". If you wish to fee-bump
transaction X with sponsor, how can you be sure that transaction
Y isn't present in the majority of network nodes, and X has _not_ been
dropped since your last broadcast ? Otherwise, iirc the sponsor design, your
sponsor transaction is going to be rejected.
I think you can't, and thus preventively you should broadcast the
sponsoring/sponsored transactions as a (new type of) package.
That said, I'm not sure if that issue affects vaults as much as
payment channels. With vaults, the tree of transactions is known ahead,
and there is no competition in the spends. Assuming the first broadcast has
been efficient (and it could be a reasonable assumption thanks to mempool
rebroadcast), the sponsor should propagate.
So I think here for the sake of sponsor efficiency analysis, we might have
to distinguish between the protocols with once-and-for-all transaction
negotiation (vaults) and the ones with off-chain, dynamic re-negotiation
(payment channels, factories) ?
> I'm not familiar with the L2 dust-limit issues, and I do think that
> "fixing" RBF behavior is *probably* worthwhile.
Sadly, it sounds like "fixing" RBF behavior is a requirement to eradicate
the most advanced pinnings... That fix is independent of the fee-bumping
primitive considered.
> Those issues aside, I
> think the transaction sponsors idea may be closer to a silver bullet
> than you're giving it credit for, because designing specifically for the
> fee-management use case has some big benefits.
I don't deny the scheme is interesting, though I would argue SIGHASH_GROUP
is more efficient, while offering more flexibility. In any case, I think we
should still pursue further the collections of problems and requirements
(batching, key management, ...) that new fee-bumping primitives should aim
to solve, before engaging more on the deployment of one of them [0].
[0] In that sense see
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-May/019031.html
Le lun. 14 févr. 2022 à 15:29, James O'Beirne <james.obeirne@gmail.com> a
écrit :
> Thanks for your thoughtful reply Antoine.
>
> > In a distributed system such as the Bitcoin p2p network, you might
> > have transaction A and transaction B broadcast at the same time and
> > your peer topology might fluctuate between original send and
> > broadcast of the diff, you don't know who's seen what... You might
> > inefficiently announce diff A on top of B and diff B on top A. We
> > might leverage set reconciliation there a la Erlay, though likely
> > with increased round-trips.
>
> In the context of fee bumping, I don't see how this is a criticism
> unique to transaction sponsors, since it also applies to CPFP: if you
> tried to bump fees for transaction A with child txn B, if some mempool
> hasn't seen parent A, it will reject B.
>
> > Have you heard about SIGHASH_GROUP [0] ?
>
> I haven't - I'll spend some time reviewing this. Thanks.
>
> > > [me complaining CPFP requires lock-in to keys]
> >
> > It's true it requires pre-specifying the fee-bumping key. Though note
> > the fee-bumping key can be fully separated from the
> > "vaults"/"channels" set of main keys and hosted on replicated
> > infrastructure such as watchtowers.
>
> This still doesn't address the issue I'm talking about, which is if you
> pre-commit to some "fee-bumping" key in your CPFP outputs and that key
> ends up being compromised. This isn't a matter of data availability or
> redundancy.
>
> Note that this failure may be unique to vault use cases, when you're
> pre-generating potentially large numbers of transactions or covenants
> that cannot be altered after the fact. If you generate vault txns that
> assume the use of some key for CPFP-based fee bumping and that key
> winds up being compromised, that puts you in an uncomfortable
> situation: you can no longer bump fees on unvaulting transactions,
> rendering the vaults possibly unretrievable depending on the fee market.
>
> > As a L2 transaction issuer you can't be sure the transaction you wish
> > to point to is already in the mempool, or has not been replaced by
> > your counterparty spending the same shared-utxo, either competitively
> > or maliciously. So as a measure of caution, you should broadcast
> > sponsor + target transactions in the same package, thus cancelling
> > the bandwidth saving (I think).
>
> As I mentioned in the reply to Matt's message, I'm not quite
> understanding this idea of wanting to bump the fee for something
> without knowing what it is; that doesn't make much sense to me.
> The "bump fee" operation seems contingent on knowing
> what you want to bump.
>
> And if you're, say, trying to broadcast a lightning channel close and
> you know you need to bump the fee right away, before even broadcasting
> it, either you're going to
>
> - reformulate the txn to bring up the fee rate (e.g. add inputs
> with some yet-undeployed sighash) as you would have done with RBF, or
>
> - you'd have the same "package relay" problem with CPFP that you
> would with transaction sponsors.
>
> So I don't understand the objection here.
>
> Also, I didn't mean to discourage existing work on package relay or
> fixing RBF, which seem clearly important. Maybe I should have noted
> that explicitly in the original message
>
> > I don't think a sponsor is a silver-bullet to solve all the
> > L2-related mempool issues. It won't solve the most concerning pinning
> > attacks, as I think the bottleneck is replace-by-fee. Neither does it
> > solve the issues imposed on the L2s by the dust limit.
>
> I'm not familiar with the L2 dust-limit issues, and I do think that
> "fixing" RBF behavior is *probably* worthwhile. Those issues aside, I
> think the transaction sponsors idea may be closer to a silver bullet
> than you're giving it credit for, because designing specifically for the
> fee-management use case has some big benefits.
>
> For one, it makes migration easier. That is to say: there is none,
> whereas there is existing RBF policy that needs consideration.
>
> But maybe more importantly, transaction sponsors' limited use case also
> allows for specifying much more targeted "replacement" policy since
> sponsors are special-purpose transactions that only exist to
> dynamically bump feerate. E.g. my SIGHASH_{NONE,SINGLE}|ANYONECANPAY
> proposal might make complete sense for the sponsors/fee-management use
> case, and clarify the replacement problem, but obviously wouldn't work
> for more general transaction replacement. In other words, RBF's
> general nature might make it a much harder problem to solve well.
>
[-- Attachment #2: Type: text/html, Size: 9704 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-11 5:26 ` Antoine Riard
@ 2022-02-14 20:28 ` James O'Beirne
2022-02-15 0:43 ` Antoine Riard
0 siblings, 1 reply; 25+ messages in thread
From: James O'Beirne @ 2022-02-14 20:28 UTC (permalink / raw)
To: Antoine Riard; +Cc: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 4363 bytes --]
Thanks for your thoughtful reply Antoine.
> In a distributed system such as the Bitcoin p2p network, you might
> have transaction A and transaction B broadcast at the same time and
> your peer topology might fluctuate between original send and
> broadcast of the diff, you don't know who's seen what... You might
> inefficiently announce diff A on top of B and diff B on top A. We
> might leverage set reconciliation there a la Erlay, though likely
> with increased round-trips.
In the context of fee bumping, I don't see how this is a criticism
unique to transaction sponsors, since it also applies to CPFP: if you
tried to bump fees for transaction A with child txn B, if some mempool
hasn't seen parent A, it will reject B.
> Have you heard about SIGHASH_GROUP [0] ?
I haven't - I'll spend some time reviewing this. Thanks.
> > [me complaining CPFP requires lock-in to keys]
>
> It's true it requires pre-specifying the fee-bumping key. Though note
> the fee-bumping key can be fully separated from the
> "vaults"/"channels" set of main keys and hosted on replicated
> infrastructure such as watchtowers.
This still doesn't address the issue I'm talking about, which is if you
pre-commit to some "fee-bumping" key in your CPFP outputs and that key
ends up being compromised. This isn't a matter of data availability or
redundancy.
Note that this failure may be unique to vault use cases, when you're
pre-generating potentially large numbers of transactions or covenants
that cannot be altered after the fact. If you generate vault txns that
assume the use of some key for CPFP-based fee bumping and that key
winds up being compromised, that puts you in an uncomfortable
situation: you can no longer bump fees on unvaulting transactions,
rendering the vaults possibly unretrievable depending on the fee market.
> As a L2 transaction issuer you can't be sure the transaction you wish
> to point to is already in the mempool, or has not been replaced by
> your counterparty spending the same shared-utxo, either competitively
> or maliciously. So as a measure of caution, you should broadcast
> sponsor + target transactions in the same package, thus cancelling
> the bandwidth saving (I think).
As I mentioned in the reply to Matt's message, I'm not quite
understanding this idea of wanting to bump the fee for something
without knowing what it is; that doesn't make much sense to me.
The "bump fee" operation seems contingent on knowing
what you want to bump.
And if you're, say, trying to broadcast a lightning channel close and
you know you need to bump the fee right away, before even broadcasting
it, either you're going to
- reformulate the txn to bring up the fee rate (e.g. add inputs
with some yet-undeployed sighash) as you would have done with RBF, or
- you'd have the same "package relay" problem with CPFP that you
would with transaction sponsors.
So I don't understand the objection here.
Also, I didn't mean to discourage existing work on package relay or
fixing RBF, which seem clearly important. Maybe I should have noted
that explicitly in the original message
> I don't think a sponsor is a silver-bullet to solve all the
> L2-related mempool issues. It won't solve the most concerning pinning
> attacks, as I think the bottleneck is replace-by-fee. Neither does it
> solve the issues imposed on the L2s by the dust limit.
I'm not familiar with the L2 dust-limit issues, and I do think that
"fixing" RBF behavior is *probably* worthwhile. Those issues aside, I
think the transaction sponsors idea may be closer to a silver bullet
than you're giving it credit for, because designing specifically for the
fee-management use case has some big benefits.
For one, it makes migration easier. That is to say: there is none,
whereas there is existing RBF policy that needs consideration.
But maybe more importantly, transaction sponsors' limited use case also
allows for specifying much more targeted "replacement" policy since
sponsors are special-purpose transactions that only exist to
dynamically bump feerate. E.g. my SIGHASH_{NONE,SINGLE}|ANYONECANPAY
proposal might make complete sense for the sponsors/fee-management use
case, and clarify the replacement problem, but obviously wouldn't work
for more general transaction replacement. In other words, RBF's
general nature might make it a much harder problem to solve well.
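As a minimal sketch of that sponsor-only replacement policy (hypothetical
helper names, not Core's actual RBF code):

    def accept_sponsor_replacement(old_sponsor, new_sponsor):
        # every input signed SIGHASH_{NONE,SINGLE}|ANYONECANPAY, so
        # third parties can "mix in" inputs that raise the feerate
        if not all(inp.is_anyonecanpay() for inp in new_sponsor.inputs):
            return False
        if not new_sponsor.signals_rbf():
            return False
        # a replacement must raise the sponsor feerate, ancestors included
        return (new_sponsor.ancestor_feerate()
                > old_sponsor.ancestor_feerate())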
[-- Attachment #2: Type: text/html, Size: 5002 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-11 0:12 ` Matt Corallo
@ 2022-02-14 19:51 ` James O'Beirne
2022-02-17 14:32 ` Anthony Towns
1 sibling, 0 replies; 25+ messages in thread
From: James O'Beirne @ 2022-02-14 19:51 UTC (permalink / raw)
To: Matt Corallo; +Cc: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 2108 bytes --]
> This entirely misses the network cost. Yes, sure, we can send
> "diffs", but if you send enough diffs eventually you send a lot of data.
The whole point of that section of the email was to consider the
network cost. There are many cases for which transmitting a
supplementary 1-in-1-out transaction (i.e. a sponsorship txn) is going
to be more efficient from a bandwidth standpoint than rebroadcasting a
potentially large txn during RBF.
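A back-of-the-envelope comparison, under assumed sizes (a large batched
transaction being bumped vs. a roughly 1-in-1-out sponsor):

    BUMPED_TX_SIZE = 10_000   # assumed bytes: a large batched withdrawal
    SPONSOR_TX_SIZE = 200     # assumed bytes: a 1-input, 1-output segwit txn

    rbf_cost = BUMPED_TX_SIZE          # RBF re-relays the whole txn
    sponsor_cost = SPONSOR_TX_SIZE     # the sponsor is only the "diff"
    print(rbf_cost // sponsor_cost)    # -> 50x less relay data in this case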
> > In an ideal design, special structural foresight would not be
> > needed in order for a txn's feerate to be improved after broadcast.
> >
> > Anchor outputs specified solely for CPFP, which amount to many
> > bytes of wasted chainspace, are a hack. > It's probably
> > uncontroversial at this
>
> This has nothing to do with fee bumping, though, this is only solved
> with covenants or something in that direction, not different relay
> policy.
My post isn't only about relay policy; it's that txn
sponsors allow for fee-bumping in cases where RBF isn't possible and
CPFP would be wasteful, e.g. for a tree of precomputed vault
transactions or - maybe more generally - certain kinds of
covenants.
> How does this not also fail your above criteria of not wasting block
> space?
In certain cases (e.g. vault structures), using sponsorship txns to
bump fees as-needed is more blockspace-efficient than including
mostly-unused CPFP "anchor" outputs that pay to fee-management wallets.
I'm betting there are other similar cases where CPFP anchors are
included but not necessarily used, and amount to wasted blockspace.
> Further, this doesn't solve pinning attacks at all. In lightning we
> want to be able to *replace* something in the mempool (or see it
> confirm soon, but that assumes we know exactly what transaction is in
> "the" mempool). Just being able to sponsor something doesn't help if
> you don't know what that thing is.
When would you be trying to bump the fee on a transaction without
knowing what it is? Seeing a specific transaction "stuck" in the
mempool seems to be a prerequisite to bumping fees. I'm not sure what
you're getting at here.
[-- Attachment #2: Type: text/html, Size: 2444 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-11 6:51 ` darosior
@ 2022-02-12 19:44 ` Billy Tetrud
0 siblings, 0 replies; 25+ messages in thread
From: Billy Tetrud @ 2022-02-12 19:44 UTC (permalink / raw)
To: darosior, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 5400 bytes --]
With respect to the disagreement/misunderstanding about the "<1vMB in the
mempool" case, I think it's important to be clear about what the goals of
relay policy are. Should the goal be to only relay transactions that
increase miner revenue? Sure ideally, because we want to minimize load on
the network. But practically, getting that goal 100% probably involves
tradeoffs of diminishing returns.
The only way to ensure that a transaction is only relayed when it increases
miner revenue is to make relay rules exactly match miner inclusion rules.
And since we don't want to (nor can we) force miners to do transaction
inclusion the same as each other, we certainly can't realistically produce
an environment where relay rules exactly match miner inclusion rules.
So I think the goal should *not* be strictly minimal relay, because it's
not practical and basically not even possible. Instead, the goal should be
some close-enough approach.
This relates to the "<1vMB in the mempool" case because the disagreement
seems to be related to what trade offs to make. A simple rule that the
fee-rate must be bumped by at least X satoshi would indeed allow the
scenario darosior describes, where someone can broadcast one large
low-feerate transaction and then subsequently broadcast smaller but
higher-feerate transactions. The question is: is that really likely to be a
problem? This can be framed by considering a couple of cases:
* The average case
* The adversarial worst case
In the average case, no one is going to be broadcasting any transactions
like that because they don't need to. So in the average case, that scenario
can be ignored. In the adversarial case, however, some large actor that
sends lots of transactions could spam the network any time blockchain
congestion is low. What's the worst someone could do?
Well, if there really aren't enough transactions to even fill the block,
then without an absolute-fee bump requirement a malicious actor could
create a lot of spam - to the tune of over 8000 transactions (assuming a 1
sat/vb relay rule) for an empty mempool, where the malicious actor sends a
2MB transaction with a 1 sat/vb fee, then a 1MB transaction at 2 sat/vb,
then a 666KB transaction at 3 sat/vb, etc. But considering that this
transaction would already take up the entire block, it would be just as
effective for an attacker to send 8000 minimal-sized transactions and
have them relayed. So this method of attack doesn't gain the attacker any
additional power to spam the network. Not to mention that nodes should be
easily able to handle that load, so there's not much of an actual "attack"
happening here. Just an insignificant amount of avoidable extra spent
electricity and unnecessary internet traffic. Nothing that's going to make
running a full node any harder.
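For what it's worth, the ~8000 figure is reproducible with a small
calculation under the assumptions above, plus an assumed ~250 vbyte minimum
transaction size: each successive broadcast keeps the absolute fee constant
while raising the feerate by 1 sat/vb and shrinking the transaction.

    ABSOLUTE_FEE = 2_000_000   # sats: 2,000,000 vbytes at 1 sat/vb
    MIN_TX_VSIZE = 250         # assumed smallest relayable transaction

    feerate, relayed_txns = 1, 0
    while ABSOLUTE_FEE // feerate >= MIN_TX_VSIZE:
        relayed_txns += 1
        feerate += 1
    print(relayed_txns)        # -> 8000 with these assumptions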
And in the case that there *are* enough transactions to fill the block
(which I think is the normal case, and it really should become a rarity for
this not to be the case in the future), higher-feerate transactions are always
better unless you already overpaid for fees. Sure you can overpay and then
add some spam by making successively higher feerate but smaller
transactions, but in that case you've basically paid for all that spam up
front with your original fee. So is it really spam? If you've covered the
cost of it, then it's not spam as much as it is stupid behavior.
So I'm inclined to agree with O'Beirne (and Lisa Neigut) that valid
transactions with feerate bumps should never be excluded from relay as long
as the amount of the feerate bump is more than the node's minimum
transaction fee. Doing that would also get rid of the spectre of
transaction pinning.
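A minimal sketch of that relay rule as stated (hypothetical types; the
"minimum transaction fee" here is read as the node's minimum relay feerate):

    def accept_feerate_only_replacement(old_tx, new_tx, min_relay_feerate=1.0):
        old_rate = old_tx.fee / old_tx.vsize
        new_rate = new_tx.fee / new_tx.vsize
        # relay any replacement whose feerate improves by at least the
        # node's floor, with no absolute-fee requirement
        return new_rate >= old_rate + min_relay_feerate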
*I'm curious if there's some other type of scenario where removing the
absolute fee bump rule would cause nodes to relay more transactions than
they would relay in a full/congested mempool scenario*. We shouldn't care
about spam that only happens when the network is quiet and that can't bring
network traffic above normal non-quiet loads, because a case like that isn't
a DoS risk.
On Fri, Feb 11, 2022 at 3:13 AM darosior via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> Well because in the example i gave you this decreases the miner's reward.
> The rule of increasing feerate you stated isn't always economically
> rational.
>
>
> Note how it can also be extended, for instance if the miner only has
> 1.5vMB of txs and is not assured to receive enough transactions to fill 2
> blocks, he might be interested in maximizing absolute fees, not feerate.
>
>
> Sure, we could make the argument that long term we need a large backlog of
> transactions anyways... But that'd unfortunately not be in line with
> today's reality.
>
>
> -------- Original Message --------
> On Feb 11, 2022, 00:51, James O'Beirne < james.obeirne@gmail.com> wrote:
>
>
> > It's not that simple. As a miner, if i have less than 1vMB of
> transactions in my mempool, I don't want a 10sats/vb transaction paying
> 100000sats replaced by a 100sats/vb transaction paying only 10000sats.
>
> I don't understand why the "<1vMB in the mempool" case is even worth
> consideration because the miner will just include the entire mempool in the
> next block regardless of feerate.
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
[-- Attachment #2: Type: text/html, Size: 6512 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-10 23:51 ` James O'Beirne
@ 2022-02-11 6:51 ` darosior
2022-02-12 19:44 ` Billy Tetrud
0 siblings, 1 reply; 25+ messages in thread
From: darosior @ 2022-02-11 6:51 UTC (permalink / raw)
To: james.obeirne; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 995 bytes --]
Well, because in the example i gave you, this decreases the miner's reward. The rule of increasing feerate you stated isn't always economically rational.
Note how it can also be extended: for instance, if the miner only has 1.5vMB of txs and is not assured to receive enough transactions to fill 2 blocks, he might be interested in maximizing absolute fees, not feerate.
Sure, we could make the argument that long term we need a large backlog of transactions anyways... But that'd unfortunately not be in line with today's reality.
-------- Original Message --------
On Feb 11, 2022, 00:51, James O'Beirne wrote:
>> It's not that simple. As a miner, if i have less than 1vMB of transactions in my mempool, I don't want a 10sats/vb transaction paying 100000sats replaced by a 100sats/vb transaction paying only 10000sats.
>
> I don't understand why the "<1vMB in the mempool" case is even worth consideration because the miner will just include the entire mempool in the next block regardless of feerate.
[-- Attachment #2: Type: text/html, Size: 1230 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-10 19:40 James O'Beirne
` (2 preceding siblings ...)
2022-02-11 0:12 ` Matt Corallo
@ 2022-02-11 5:26 ` Antoine Riard
2022-02-14 20:28 ` James O'Beirne
3 siblings, 1 reply; 25+ messages in thread
From: Antoine Riard @ 2022-02-11 5:26 UTC (permalink / raw)
To: James O'Beirne, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 17071 bytes --]
Hi James,
I fully agree on the need to reframe the conversation around
mempools/fee-bumping/L2s, though please see my following comments - it's far
from simple!
> Layering on special cases, more carve-outs, and X and Y percentage
> thresholds is going to make reasoning about the mempool harder than it
> already is.
I think that's true with a lot of (all?) pieces of software: there is a
trend towards complexification. As new Bitcoin use-cases such as LN or
vaults appear, it's not surprising to see the base layer's upper interfaces
changing to support the requirements. Same with kernels: at the beginning
you can have basic memory support with paging/memory rights/kernel
allocators, then as you start to support more platforms/devices you might
have to support swap/DMA/VM management...
That we should keep the complexity reasonably sane to enable human
auditing, and maybe learn from the failures of systems engineering - that's
something to muse on.
> The countervailing force here ends up being spam prevention (a la
> min-relay-fee) to prevent someone from consuming bandwidth and mempool
> space with a long series of infinitesimal fee-bumps.
I think here we should dissociate a) long chains of transactions and b) a
high number of repeated fee-bumps.
For a), _if_ SIGHASH_ANYPREVOUT is deployed and Eltoo adopted as a primary
update mechanism for stateful L2s, one might envision long chains of update
transactions serving as a new pinning vector, where all the chain
elements are offering compelling feerates/fees. It might be solvable with
smarter mempool logic sorting the elements from the best offers to the
lowest ones, though that issue would need more serious investigation.
For b), if we bound the number of RBF attempts with a hard constant, we
decrease the fault-tolerance of L2 transaction issuers. Some of them might
connect directly to the miners because miners would offer a higher number of
incentive-compatible RBF attempts than vanilla full-nodes. That might
provoke a more or less slow centralization of the transaction relay
topology...
> Instead of prompting a rebroadcast of the original transaction for
> replacement, which contains a lot of data not new to the network, it
> makes more sense to broadcast the "diff" which is the additive
> contribution towards some txn's feerate.
In a distributed system such as the Bitcoin p2p network, you might have
transaction A and transaction B broadcast at the same time and your peer
topology might fluctuate between original send and broadcast
of the diff, you don't know who's seen what... You might inefficiently
announce diff A on top of B and diff B on top A. We might leverage set
reconciliation there a la Erlay, though likely with increased round-trips.
> It's probably uncontroversial at this
> point to say that even RBF itself is kind of a hack - a special
> sequence number should not be necessary for post-broadcast contribution
> toward feerate.
I think here we should dissociate the replace-by-fee mechanism itself from
the replacement signaling one. To have a functional RBF, you don't need
signaling at all, just consider all received transactions as replaceable.
The replacement signaling one has been historically motivated to protect
the applications relying on zero-confs (with all the past polemics about
the well-foundedness of such claims on other nodes' policy).
> In a sane design, no structural foresight - and certainly no wasted
> bytes in the form of unused anchor outputs - should be needed in order
> to add to a miner's reward for confirming a given transaction.
Have you heard about SIGHASH_GROUP [0] ? It would move away from the
transaction to enable arbitrary bundles of input/outputs. You will have
your pre-signed bundle of inputs/outputs enforcing your LN/vaults/L2 and
then at broadcast time, you can attach an input/output. I think it would
answer your structural foresight concern.
> One of the practical downsides of CPFP that I haven't seen discussed in
> this conversation is that it requires the transaction to pre-specify the
> keys needed to sign for fee bumps. This is problematic if you're, for
> example, using a vault structure that makes use of pre-signed
> transactions.
It's true it requires pre-specifying the fee-bumping key. Though note the
fee-bumping key can be fully separated from the "vaults"/"channels" set of
main keys and hosted on replicated infrastructure such as watchtowers.
> The interface for end-users is very straightforward: if you want to bump
> fees, specify a transaction that contributes incrementally to package
> feerate for some txid. Simple.
As a L2 transaction issuer you can't be sure the transaction you wish to
point to is already in the mempool, or has not been replaced by your
counterparty spending the same shared-utxo, either competitively or
maliciously. So as a measure of caution, you should broadcast sponsor +
target transactions in the same package, thus cancelling the bandwidth
saving (I think).
> This theoretical concession seems preferable to heaping more rules onto
> an already labyrinthine mempool policy that is difficult for both
> implementers and users to reason about practically and conceptually.
I don't think a sponsor is a silver-bullet to solve all the L2-related
mempool issues. It won't solve the most concerning pinning attacks, as I
think the bottleneck is replace-by-fee. Neither does it solve the issues
imposed on the L2s by the dust limit.
> If a soft-fork is the cost of cleaning up this essential process,
> consideration should be given to paying it as a one-time cost. This
> topic merits a separate post, but consider that in the 5 years leading
> up to the 2017 SegWit drama, we averaged about a soft-fork a year.
> Uncontroversial, "safe" changes to the consensus protocol shouldn't be
> out of the question when significant practical benefit is plain to see.
Zooming out, I think we're still early in solving those L2 issues, as the
major second layers are still in a design or deployment phase. We
might freeze our transaction propagation interface, and come up short for
some of the most interesting ones like channel factories and payment pools.
Further, I think we're not entirely sure how the mining ecosystem is going
to behave once the block subsidy is drained, nor what their incentives
towards L2 confirmations will be.
Still, if we think we have a correct picture of the fee-bumping/mempool
issues and are sufficiently confident in the stability of L2 designs, I
think the next step would be to come up with quantitative modelling of each
resource consumed by fee-bumping (CPU validation/bandwidth/signing
interactivity for the L2s...) and then score the "next-gen" fee-bumping
primitives.
> I'm not out to propose soft-forks lightly, but the current complexity
> in fee management feels untenable, and as evidenced by all the
> discussion lately, fees are an increasingly crucial part of the system.
Overall, I think that's a relevant discussion to have ecosystem-wise.
Though there is a lot of context and I don't think there is a simple way
forward. Maybe better to stick to an evolutionary development process with
those mempool/fee-bumping issues. We might envision two or three steps
ahead, though unlikely more.
Cheers,
Antoine
[0] SIGHASH_GROUP described here
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-May/019031.html
and roughly implemented here:
https://github.com/ariard/bitcoin/pull/1
Le jeu. 10 févr. 2022 à 14:48, James O'Beirne via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> a écrit :
> There's been much talk about fee-bumping lately, and for good reason -
> dynamic fee management is going to be a central part of bitcoin use as
> the mempool fills up (lord willing) and right now fee-bumping is
> fraught with difficulty and pinning peril.
>
> Gloria's recent post on the topic[0] was very lucid and highlights a
> lot of the current issues, as well as some proposals to improve the
> situation.
>
> As others have noted, the post was great. But throughout the course
> of reading it and the ensuing discussion, I became troubled by the
> increasing complexity of both the status quo and some of the
> proposed remedies.
>
> Layering on special cases, more carve-outs, and X and Y percentage
> thresholds is going to make reasoning about the mempool harder than it
> already is. Special consideration for "what should be in the next
> block" and/or the caching of block templates seems like an imposing
> dependency, dragging in a bunch of state and infrastructure to a
> question that should be solely limited to mempool feerate aggregates
> and the feerate of the particular txn package a wallet is concerned
> with.
>
> This is bad enough for protocol designers and Core developers, but
> making the situation any more intractable for "end-users" and wallet
> developers feels wrong.
>
> I thought it might be useful to step back and reframe. Here are a few
> aims that are motivated chiefly by the quality of end-user experience,
> constrained to obey incentive compatibility (i.e. miner reward, DoS
> avoidance). Forgive the abstract dalliance for a moment; I'll talk
> through concretes afterwards.
>
>
> # Purely additive feerate bumps should never be impossible
>
> Any user should always be able to add to the incentive to mine any
> transaction in a purely additive way. The countervailing force here
> ends up being spam prevention (a la min-relay-fee) to prevent someone
> from consuming bandwidth and mempool space with a long series of
> infinitesimal fee-bumps.
>
> A fee bump, naturally, should be given the same per-byte consideration
> as a normal Bitcoin transaction in terms of relay and block space,
> although it would be nice to come up with a more succinct
> representation. This leads to another design principle:
>
>
> # The bandwidth and chain space consumed by a fee-bump should be minimal
>
> Instead of prompting a rebroadcast of the original transaction for
> replacement, which contains a lot of data not new to the network, it
> makes more sense to broadcast the "diff" which is the additive
> contribution towards some txn's feerate.
>
> This dovetails with the idea that...
>
>
> # Special transaction structure should not be required to bump fees
>
> In an ideal design, special structural foresight would not be needed
> in order for a txn's feerate to be improved after broadcast.
>
> Anchor outputs specified solely for CPFP, which amount to many bytes of
> wasted chainspace, are a hack. It's probably uncontroversial at this
> point to say that even RBF itself is kind of a hack - a special
> sequence number should not be necessary for post-broadcast contribution
> toward feerate. Not to mention RBF's seemingly wasteful consumption of
> bandwidth due to the rebroadcast of data the network has already seen.
>
> In a sane design, no structural foresight - and certainly no wasted
> bytes in the form of unused anchor outputs - should be needed in order
> to add to a miner's reward for confirming a given transaction.
>
> Planning for fee-bumps explicitly in transaction structure also often
> winds up locking in which keys are required to bump fees, at odds
> with the idea that...
>
>
> # Feerate bumps should be able to come from anywhere
>
> One of the practical downsides of CPFP that I haven't seen discussed in
> this conversation is that it requires the transaction to pre-specify the
> keys needed to sign for fee bumps. This is problematic if you're, for
> example, using a vault structure that makes use of pre-signed
> transactions.
>
> What if the key you specified in the anchor outputs for a bunch of
> pre-signed txns is compromised? What if you'd like to be able to
> dynamically select the wallet that bumps fees? CPFP does you no favors
> here.
>
> There is of course a tension between allowing fee bumps to come from
> anywhere and the threat of pinning-like attacks. So we should venture
> to remove pinning as a possibility, in line with the first design
> principle I discuss.
>
>
> ---
>
> Coming down to earth, the "tabula rasa" thought experiment above has led
> me to favor an approach like the transaction sponsors design that Jeremy
> proposed in a prior discussion back in 2020[1].
>
> Transaction sponsors allow feerates to be bumped after a transaction's
> broadcast, regardless of the structure of the original transaction.
> No rebroadcast (wasted bandwidth) is required for the original txn data.
> No wasted chainspace on only-maybe-used prophylactic anchor outputs.
>
> The interface for end-users is very straightforward: if you want to bump
> fees, specify a transaction that contributes incrementally to package
> feerate for some txid. Simple.
>
> In the original discussion, there were a few main objections that I noted:
>
> 1. In Jeremy's original proposal, only one sponsor txn per txid is
> allowed by policy. A malicious actor could execute a pinning-like
> attack by specifying an only-slightly-helpful feerate sponsor that
> then precludes other larger bumps.
>
> I think there are some ways around this shortcoming. For example: what
> if, by policy, sponsor txns had additional constraints that
>
> - each input must be signed {SIGHASH_SINGLE,SIGHASH_NONE}|ANYONECANPAY,
> - the txn must be specified RBFable,
> - a replacement for the sponsor txn must raise the sponsor feerate,
> including ancestors (maybe this is inherent in "is RBFable," but
> I don't want to conflate absolute feerates into this).
>
> That way, there is still at most a single sponsor txn per txid in the
> mempool, but anyone can "mix in" inputs which bump the effective
> feerate of the sponsor.
>
> This may not be the exact solution we want, but I think it demonstrates
> that the sponsors design has some flexibility and merits some thinking.
>
> The second objection about sponsors was
>
> 2. (from Suhas) sponsors break the classic invariant: "once a valid
> transaction is created, it should not become invalid later on unless
> the inputs are double-spent."
>
> This doesn't seem like a huge concern to me if you consider the txid
> being sponsored as a sort of spiritual input to the sponsor. While the
> theoretical objection against broadening where one has to look in a txn
> to determine its dependencies is understandable, I don't see what the
> practical cost here is.
>
> Reorg complexity seems comparable if not identical, especially if we
> broaden sponsor rules to allow blocks to contain sponsor txns that are
> for txids either in the same block _or_ already included in the chain.
>
> This theoretical concession seems preferable to heaping more rules onto
> an already labyrinthine mempool policy that is difficult for both
> implementers and users to reason about practically and conceptually.
>
> A third objection that wasn't posed, IIRC, but almost certainly would
> be:
>
> 3. Transaction sponsors requires a soft-fork.
>
> Soft-forks are no fun, but I'll tell you what also isn't fun: being on
> the hook to model (and sometimes implement) a dizzying potpourri of
> mempool policies and special-cases. Expecting wallet implementers to
> abide by a maze of rules faithfully in order to ensure txn broadcast and
> fee management invites bugs for perpetuity and network behavior that is
> difficult to reason about a priori. Use of CPFP in the long-term also
> risks needless chain waste.
>
> If a soft-fork is the cost of cleaning up this essential process,
> consideration should be given to paying it as a one-time cost. This
> topic merits a separate post, but consider that in the 5 years leading
> up to the 2017 SegWit drama, we averaged about a soft-fork a year.
> Uncontroversial, "safe" changes to the consensus protocol shouldn't be
> out of the question when significant practical benefit is plain to see.
>
> ---
>
> I hope this message has added some framing to the discussion on fees,
> as well prompting other participants to go back and give the
> transaction sponsor proposal a serious look. The sponsors interface is
> about the simplest I can imagine for wallets, and it seems easy to
> reason about for implementers on Core and elsewhere.
>
> I'm not out to propose soft-forks lightly, but the current complexity
> in fee management feels untenable, and as evidenced by all the
> discussion lately, fees are an increasingly crucial part of the system.
>
>
>
> [0]:
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019817.html
> [1]:
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 18600 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-10 19:40 James O'Beirne
2022-02-10 23:09 ` Greg Sanders
2022-02-10 23:44 ` darosior
@ 2022-02-11 0:12 ` Matt Corallo
2022-02-14 19:51 ` James O'Beirne
2022-02-17 14:32 ` Anthony Towns
2022-02-11 5:26 ` Antoine Riard
3 siblings, 2 replies; 25+ messages in thread
From: Matt Corallo @ 2022-02-11 0:12 UTC (permalink / raw)
To: James O'Beirne, Bitcoin Protocol Discussion
This is great in theory, but I think it kinda misses *why* the complexity keeps creeping in. We
agree on (most of) the goals here, but the problem is the goals explicitly lead to the complexity,
it's not some software engineering failure or imagination failure that leads to the complexity.
On 2/10/22 14:40, James O'Beirne via bitcoin-dev wrote:
-snip-
> # Purely additive feerate bumps should never be impossible
>
> Any user should always be able to add to the incentive to mine any
> transaction in a purely additive way. The countervailing force here
> ends up being spam prevention (a la min-relay-fee) to prevent someone
> from consuming bandwidth and mempool space with a long series of
> infinitesimal fee-bumps.
>
> A fee bump, naturally, should be given the same per-byte consideration
> as a normal Bitcoin transaction in terms of relay and block space,
> although it would be nice to come up with a more succinct
> representation. This leads to another design principle:
This is where *all* the complexity comes from. If our goal is to "ensure a bump increases a miner's
overall revenue" (thus not wasting relay for everyone else), then we precisely *do* need
> Special consideration for "what should be in the next
> block" and/or the caching of block templates seems like an imposing
> dependency
Whether a transaction increases a miner's revenue depends precisely on whether the transaction
(package) being replaced is in the next block - if it is, you care about the absolute fee of the
package and its replacement. If it is not in the next block (or, really, not near a block boundary
or further down in the mempool where you assume other transactions will appear around it over time),
then you care about the fee *rate*, not the fee difference.
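A caricature of that distinction, with hypothetical package objects:

    def replacement_helps_miner(old_pkg, new_pkg, in_next_block):
        if in_next_block:
            # near the block boundary the miner cares about absolute fees:
            # a higher-feerate but lower-fee replacement costs them sats
            return new_pkg.fee > old_pkg.fee
        # deeper in the mempool, other txns will fill in around it,
        # so the fee *rate* is what matters
        return (new_pkg.fee / new_pkg.vsize) > (old_pkg.fee / old_pkg.vsize)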
> # The bandwidth and chain space consumed by a fee-bump should be minimal
>
> Instead of prompting a rebroadcast of the original transaction for
> replacement, which contains a lot of data not new to the network, it
> makes more sense to broadcast the "diff" which is the additive
> contribution towards some txn's feerate.
This entirely misses the network cost. Yes, sure, we can send "diffs", but if you send enough diffs
eventually you send a lot of data. We cannot simply ignore network-wide costs like total relay
bandwidth (or implementation runtime DoS issues).
> # Special transaction structure should not be required to bump fees
>
> In an ideal design, special structural foresight would not be needed
> in order for a txn's feerate to be improved after broadcast.
>
> Anchor outputs specified solely for CPFP, which amount to many bytes of
> wasted chainspace, are a hack. It's probably uncontroversial at this
This has nothing to do with fee bumping, though, this is only solved with covenants or something in
that direction, not different relay policy.
> Coming down to earth, the "tabula rasa" thought experiment above has led
> me to favor an approach like the transaction sponsors design that Jeremy
> proposed in a prior discussion back in 2020[1].
How does this not also fail your above criteria of not wasting block space?
Further, this doesn't solve pinning attacks at all. In lightning we want to be able to *replace*
something in the mempool (or see it confirm soon, but that assumes we know exactly what transaction
is in "the" mempool). Just being able to sponsor something doesn't help if you don't know what that
thing is.
Matt
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-10 23:44 ` darosior
@ 2022-02-10 23:51 ` James O'Beirne
2022-02-11 6:51 ` darosior
0 siblings, 1 reply; 25+ messages in thread
From: James O'Beirne @ 2022-02-10 23:51 UTC (permalink / raw)
To: darosior; +Cc: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 378 bytes --]
> It's not that simple. As a miner, if i have less than 1vMB of
> transactions in my mempool, I don't want a 10sats/vb transaction paying
> 100000sats replaced by a 100sats/vb transaction paying only 10000sats.
I don't understand why the "<1vMB in the mempool" case is even worth
consideration because the miner will just include the entire mempool in the
next block regardless of feerate.
[-- Attachment #2: Type: text/html, Size: 460 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-10 19:40 James O'Beirne
2022-02-10 23:09 ` Greg Sanders
@ 2022-02-10 23:44 ` darosior
2022-02-10 23:51 ` James O'Beirne
2022-02-11 0:12 ` Matt Corallo
2022-02-11 5:26 ` Antoine Riard
3 siblings, 1 reply; 25+ messages in thread
From: darosior @ 2022-02-10 23:44 UTC (permalink / raw)
To: james.obeirne, bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 9868 bytes --]
(I have not yet read the recent posts on RBF but i wanted to react on the "additive feerate".)
> # Purely additive feerate bumps should never be impossible
It's not that simple. As a miner, if i have less than 1vMB of transactions in my mempool, I don't want a 10sats/vb transaction paying 100000sats replaced by a 100sats/vb transaction paying only 10000sats.
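With the toy numbers from that example and an otherwise-empty mempool, the feerate comparison and the miner's revenue point in opposite directions:

    old_fee, old_vsize = 100_000, 10_000   # 10 sat/vb paying 100,000 sats
    new_fee, new_vsize = 10_000, 100       # 100 sat/vb paying 10,000 sats

    assert new_fee / new_vsize > old_fee / old_vsize   # feerate says replace
    print(old_fee - new_fee)   # -> 90000 sats the miner would lose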
Apart from that, I very much agree with the approach of taking a step back and reframing, with the view that CPFP is ill-suited long term (it is wasteful; it is not useful for delegating fee bumping, which, although I'm surprised I never mentioned it publicly, makes it unsuitable for Revault for instance; and the current carve-out rule makes it only suitable for 2-party protocols), and with the `diff` approach.
All that again with the caveat that I need to catch up on the recent proposals.
-------- Original Message --------
On Feb 10, 2022, 20:40, James O'Beirne via bitcoin-dev wrote:
> There's been much talk about fee-bumping lately, and for good reason -
> dynamic fee management is going to be a central part of bitcoin use as
> the mempool fills up (lord willing) and right now fee-bumping is
> fraught with difficulty and pinning peril.
>
> Gloria's recent post on the topic[0] was very lucid and highlights a
> lot of the current issues, as well as some proposals to improve the
> situation.
>
> As others have noted, the post was great. But throughout the course
> of reading it and the ensuing discussion, I became troubled by the
> increasing complexity of both the status quo and some of the
> proposed remedies.
>
> Layering on special cases, more carve-outs, and X and Y percentage
> thresholds is going to make reasoning about the mempool harder than it
> already is. Special consideration for "what should be in the next
> block" and/or the caching of block templates seems like an imposing
> dependency, dragging in a bunch of state and infrastructure to a
> question that should be solely limited to mempool feerate aggregates
> and the feerate of the particular txn package a wallet is concerned
> with.
>
> This is bad enough for protocol designers and Core developers, but
> making the situation any more intractable for "end-users" and wallet
> developers feels wrong.
>
> I thought it might be useful to step back and reframe. Here are a few
> aims that are motivated chiefly by the quality of end-user experience,
> constrained to obey incentive compatibility (i.e. miner reward, DoS
> avoidance). Forgive the abstract dalliance for a moment; I'll talk
> through concretes afterwards.
>
> # Purely additive feerate bumps should never be impossible
>
> Any user should always be able to add to the incentive to mine any
> transaction in a purely additive way. The countervailing force here
> ends up being spam prevention (a la min-relay-fee) to prevent someone
> from consuming bandwidth and mempool space with a long series of
> infinitesimal fee-bumps.
>
> A fee bump, naturally, should be given the same per-byte consideration
> as a normal Bitcoin transaction in terms of relay and block space,
> although it would be nice to come up with a more succinct
> representation. This leads to another design principle:
>
> # The bandwidth and chain space consumed by a fee-bump should be minimal
>
> Instead of prompting a rebroadcast of the original transaction for
> replacement, which contains a lot of data not new to the network, it
> makes more sense to broadcast the "diff" which is the additive
> contribution towards some txn's feerate.
>
> This dovetails with the idea that...
>
> # Special transaction structure should not be required to bump fees
>
> In an ideal design, special structural foresight would not be needed
> in order for a txn's feerate to be improved after broadcast.
>
> Anchor outputs specified solely for CPFP, which amount to many bytes of
> wasted chainspace, are a hack. It's probably uncontroversial at this
> point to say that even RBF itself is kind of a hack - a special
> sequence number should not be necessary for post-broadcast contribution
> toward feerate. Not to mention RBF's seemingly wasteful consumption of
> bandwidth due to the rebroadcast of data the network has already seen.
>
> In a sane design, no structural foresight - and certainly no wasted
> bytes in the form of unused anchor outputs - should be needed in order
> to add to a miner's reward for confirming a given transaction.
>
> Planning for fee-bumps explicitly in transaction structure also often
> winds up locking in which keys are required to bump fees, at odds
> with the idea that...
>
> # Feerate bumps should be able to come from anywhere
>
> One of the practical downsides of CPFP that I haven't seen discussed in
> this conversation is that it requires the transaction to pre-specify the
> keys needed to sign for fee bumps. This is problematic if you're, for
> example, using a vault structure that makes use of pre-signed
> transactions.
>
> What if the key you specified in the anchor outputs for a bunch of
> pre-signed txns is compromised? What if you'd like to be able to
> dynamically select the wallet that bumps fees? CPFP does you no favors
> here.
>
> There is of course a tension between allowing fee bumps to come from
> anywhere and the threat of pinning-like attacks. So we should venture
> to remove pinning as a possibility, in line with the first design
> principle I discuss.
>
> ---
>
> Coming down to earth, the "tabula rasa" thought experiment above has led
> me to favor an approach like the transaction sponsors design that Jeremy
> proposed in a prior discussion back in 2020[1].
>
> Transaction sponsors allow feerates to be bumped after a transaction's
> broadcast, regardless of the structure of the original transaction.
> No rebroadcast (wasted bandwidth) is required for the original txn data.
> No wasted chainspace on only-maybe-used prophylactic anchor outputs.
>
> The interface for end-users is very straightforward: if you want to bump
> fees, specify a transaction that contributes incrementally to package
> feerate for some txid. Simple.
>
> In the original discussion, there were a few main objections that I noted:
>
> 1. In Jeremy's original proposal, only one sponsor txn per txid is
> allowed by policy. A malicious actor could execute a pinning-like
> attack by specifying an only-slightly-helpful feerate sponsor that
> then precludes other larger bumps.
>
> I think there are some ways around this shortcoming. For example: what
> if, by policy, sponsor txns had additional constraints that
>
> - each input must be signed {SIGHASH_SINGLE,SIGHASH_NONE}|ANYONECANPAY,
> - the txn must be specified RBFable,
> - a replacement for the sponsor txn must raise the sponsor feerate,
> including ancestors (maybe this is inherent in "is RBFable," but
> I don't want to conflate absolute feerates into this).
>
> That way, there is still at most a single sponsor txn per txid in the
> mempool, but anyone can "mix in" inputs which bump the effective
> feerate of the sponsor.
>
> This may not be the exact solution we want, but I think it demonstrates
> that the sponsors design has some flexibility and merits some thinking.
>
> The second objection about sponsors was
>
> 2. (from Suhas) sponsors break the classic invariant: "once a valid
> transaction is created, it should not become invalid later on unless
> the inputs are double-spent."
>
> This doesn't seem like a huge concern to me if you consider the txid
> being sponsored as a sort of spiritual input to the sponsor. While the
> theoretical objection against broadening where one has to look in a txn
> to determine its dependencies is understandable, I don't see what the
> practical cost here is.
>
> Reorg complexity seems comparable if not identical, especially if we
> broaden sponsor rules to allow blocks to contain sponsor txns that are
> both for txids in the same block _or_ already included in the chain.
>
> This theoretical concession seems preferable to heaping more rules onto
> an already labyrinthine mempool policy that is difficult for both
> implementers and users to reason about practically and conceptually.
>
> A third objection that wasn't posed, IIRC, but almost certainly would
> be:
>
> 3. Transaction sponsors require a soft-fork.
>
> Soft-forks are no fun, but I'll tell you what also isn't fun: being on
> the hook to model (and sometimes implement) a dizzying potpourri of
> mempool policies and special-cases. Expecting wallet implementers to
> abide by a maze of rules faithfully in order to ensure txn broadcast and
> fee management invites bugs in perpetuity and network behavior that is
> difficult to reason about a priori. Use of CPFP in the long-term also
> risks needless chain waste.
>
> If a soft-fork is the cost of cleaning up this essential process,
> consideration should be given to paying it as a one-time cost. This
> topic merits a separate post, but consider that in the 5 years leading
> up to the 2017 SegWit drama, we averaged about a soft-fork a year.
> Uncontroversial, "safe" changes to the consensus protocol shouldn't be
> out of the question when significant practical benefit is plain to see.
>
> ---
>
> I hope this message has added some framing to the discussion on fees,
> as well as prompting other participants to go back and give the
> transaction sponsor proposal a serious look. The sponsors interface is
> about the simplest I can imagine for wallets, and it seems easy to
> reason about for implementers on Core and elsewhere.
>
> I'm not out to propose soft-forks lightly, but the current complexity
> in fee management feels untenable, and as evidenced by all the
> discussion lately, fees are an increasingly crucial part of the system.
>
> [0]: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019817.html
> [1]: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
[-- Attachment #2: Type: text/html, Size: 10918 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [bitcoin-dev] Thoughts on fee bumping
2022-02-10 19:40 James O'Beirne
@ 2022-02-10 23:09 ` Greg Sanders
2022-02-10 23:44 ` darosior
` (2 subsequent siblings)
3 siblings, 0 replies; 25+ messages in thread
From: Greg Sanders @ 2022-02-10 23:09 UTC (permalink / raw)
To: James O'Beirne, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 9643 bytes --]
One quick thought on the proposal, and perhaps on sponsors in general (I
didn't have time to go over the original proposal again):
Since sponsors can come from anywhere, the wallet application must have
access to the mempool to know which inputs must be double-spent to RBF the
sponsor transaction.
Seems like an important difference to be considered.
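A rough sketch of what that wallet-side flow might involve, with every interface here hypothetical (no mempool index of "sponsor of txid X" exists today, which is the point):

    def bump_sponsor(mempool, wallet, sponsored_txid, added_fee_sats):
        # Hypothetical helpers throughout.
        current = mempool.find_sponsor(sponsored_txid)
        if current is None:
            return wallet.create_sponsor(sponsored_txid, added_fee_sats)
        # To replace the existing sponsor we must conflict with it, i.e.
        # spend at least one of its inputs. Under the ANYONECANPAY scheme
        # suggested upthread, the existing inputs and signatures can be
        # reused as-is and one of our own appended to add fee.
        return wallet.extend_with_input(current, added_fee_sats)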
On Fri, Feb 11, 2022 at 3:49 AM James O'Beirne via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> There's been much talk about fee-bumping lately, and for good reason -
> dynamic fee management is going to be a central part of bitcoin use as
> the mempool fills up (lord willing) and right now fee-bumping is
> fraught with difficulty and pinning peril.
>
> Gloria's recent post on the topic[0] was very lucid and highlights a
> lot of the current issues, as well as some proposals to improve the
> situation.
>
> As others have noted, the post was great. But throughout the course
> of reading it and the ensuing discussion, I became troubled by the
> increasing complexity of both the status quo and some of the
> proposed remedies.
>
> Layering on special cases, more carve-outs, and X and Y percentage
> thresholds is going to make reasoning about the mempool harder than it
> already is. Special consideration for "what should be in the next
> block" and/or the caching of block templates seems like an imposing
> dependency, dragging in a bunch of state and infrastructure to a
> question that should be solely limited to mempool feerate aggregates
> and the feerate of the particular txn package a wallet is concerned
> with.
>
> This is bad enough for protocol designers and Core developers, but
> making the situation any more intractable for "end-users" and wallet
> developers feels wrong.
>
> I thought it might be useful to step back and reframe. Here are a few
> aims that are motivated chiefly by the quality of end-user experience,
> constrained to obey incentive compatibility (i.e. miner reward, DoS
> avoidance). Forgive the abstract dalliance for a moment; I'll talk
> through concretes afterwards.
>
>
> # Purely additive feerate bumps should never be impossible
>
> Any user should always be able to add to the incentive to mine any
> transaction in a purely additive way. The countervailing force here
> ends up being spam prevention (a la min-relay-fee) to prevent someone
> from consuming bandwidth and mempool space with a long series of
> infinitesimal fee-bumps.
>
> A fee bump, naturally, should be given the same per-byte consideration
> as a normal Bitcoin transaction in terms of relay and block space,
> although it would be nice to come up with a more succinct
> representation. This leads to another design principle:
>
>
> # The bandwidth and chain space consumed by a fee-bump should be minimal
>
> Instead of prompting a rebroadcast of the original transaction for
> replacement, which contains a lot of data not new to the network, it
> makes more sense to broadcast the "diff" which is the additive
> contribution towards some txn's feerate.
>
> This dovetails with the idea that...
>
>
> # Special transaction structure should not be required to bump fees
>
> In an ideal design, special structural foresight would not be needed
> in order for a txn's feerate to be improved after broadcast.
>
> Anchor outputs specified solely for CPFP, which amount to many bytes of
> wasted chainspace, are a hack. It's probably uncontroversial at this
> point to say that even RBF itself is kind of a hack - a special
> sequence number should not be necessary for post-broadcast contribution
> toward feerate. Not to mention RBF's seemingly wasteful consumption of
> bandwidth due to the rebroadcast of data the network has already seen.
>
> In a sane design, no structural foresight - and certainly no wasted
> bytes in the form of unused anchor outputs - should be needed in order
> to add to a miner's reward for confirming a given transaction.
>
> Planning for fee-bumps explicitly in transaction structure also often
> winds up locking in which keys are required to bump fees, at odds
> with the idea that...
>
>
> # Feerate bumps should be able to come from anywhere
>
> One of the practical downsides of CPFP that I haven't seen discussed in
> this conversation is that it requires the transaction to pre-specify the
> keys needed to sign for fee bumps. This is problematic if you're, for
> example, using a vault structure that makes use of pre-signed
> transactions.
>
> What if the key you specified in the anchor outputs for a bunch of
> pre-signed txns is compromised? What if you'd like to be able to
> dynamically select the wallet that bumps fees? CPFP does you no favors
> here.
>
> There is of course a tension between allowing fee bumps to come from
> anywhere and the threat of pinning-like attacks. So we should venture
> to remove pinning as a possibility, in line with the first design
> principle I discuss.
>
>
> ---
>
> Coming down to earth, the "tabula rasa" thought experiment above has led
> me to favor an approach like the transaction sponsors design that Jeremy
> proposed in a prior discussion back in 2020[1].
>
> Transaction sponsors allow feerates to be bumped after a transaction's
> broadcast, regardless of the structure of the original transaction.
> No rebroadcast (wasted bandwidth) is required for the original txn data.
> No wasted chainspace on only-maybe-used prophylactic anchor outputs.
>
> The interface for end-users is very straightforward: if you want to bump
> fees, specify a transaction that contributes incrementally to package
> feerate for some txid. Simple.
>
> In the original discussion, there were a few main objections that I noted:
>
> 1. In Jeremy's original proposal, only one sponsor txn per txid is
> allowed by policy. A malicious actor could execute a pinning-like
> attack by specifying an only-slightly-helpful feerate sponsor that
> then precludes other larger bumps.
>
> I think there are some ways around this shortcoming. For example: what
> if, by policy, sponsor txns had additional constraints that
>
> - each input must be signed {SIGHASH_SINGLE,SIGHASH_NONE}|ANYONECANPAY,
> - the txn must be specified RBFable,
> - a replacement for the sponsor txn must raise the sponsor feerate,
> including ancestors (maybe this is inherent in "is RBFable," but
> I don't want to conflate absolute feerates into this).
>
> That way, there is still at most a single sponsor txn per txid in the
> mempool, but anyone can "mix in" inputs which bump the effective
> feerate of the sponsor.
>
> This may not be the exact solution we want, but I think it demonstrates
> that the sponsors design has some flexibility and merits some thinking.
>
> The second objection about sponsors was
>
> 2. (from Suhas) sponsors break the classic invariant: "once a valid
> transaction is created, it should not become invalid later on unless
> the inputs are double-spent."
>
> This doesn't seem like a huge concern to me if you consider the txid
> being sponsored as a sort of spiritual input to the sponsor. While the
> theoretical objection against broadening where one has to look in a txn
> to determine its dependencies is understandable, I don't see what the
> practical cost here is.
>
> Reorg complexity seems comparable if not identical, especially if we
> broaden sponsor rules to allow blocks to contain sponsor txns that are
> both for txids in the same block _or_ already included in the chain.
>
> This theoretical concession seems preferable to heaping more rules onto
> an already labyrinthine mempool policy that is difficult for both
> implementers and users to reason about practically and conceptually.
>
> A third objection that wasn't posed, IIRC, but almost certainly would
> be:
>
> 3. Transaction sponsors require a soft-fork.
>
> Soft-forks are no fun, but I'll tell you what also isn't fun: being on
> the hook to model (and sometimes implement) a dizzying potpourri of
> mempool policies and special-cases. Expecting wallet implementers to
> abide by a maze of rules faithfully in order to ensure txn broadcast and
> fee management invites bugs in perpetuity and network behavior that is
> difficult to reason about a priori. Use of CPFP in the long-term also
> risks needless chain waste.
>
> If a soft-fork is the cost of cleaning up this essential process,
> consideration should be given to paying it as a one-time cost. This
> topic merits a separate post, but consider that in the 5 years leading
> up to the 2017 SegWit drama, we averaged about a soft-fork a year.
> Uncontroversial, "safe" changes to the consensus protocol shouldn't be
> out of the question when significant practical benefit is plain to see.
>
> ---
>
> I hope this message has added some framing to the discussion on fees,
> as well as prompting other participants to go back and give the
> transaction sponsor proposal a serious look. The sponsors interface is
> about the simplest I can imagine for wallets, and it seems easy to
> reason about for implementers on Core and elsewhere.
>
> I'm not out to propose soft-forks lightly, but the current complexity
> in fee management feels untenable, and as evidenced by all the
> discussion lately, fees are an increasingly crucial part of the system.
>
>
>
> [0]:
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019817.html
> [1]:
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 10925 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
* [bitcoin-dev] Thoughts on fee bumping
@ 2022-02-10 19:40 James O'Beirne
2022-02-10 23:09 ` Greg Sanders
` (3 more replies)
0 siblings, 4 replies; 25+ messages in thread
From: James O'Beirne @ 2022-02-10 19:40 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 8663 bytes --]
There's been much talk about fee-bumping lately, and for good reason -
dynamic fee management is going to be a central part of bitcoin use as
the mempool fills up (lord willing) and right now fee-bumping is
fraught with difficulty and pinning peril.
Gloria's recent post on the topic[0] was very lucid and highlights a
lot of the current issues, as well as some proposals to improve the
situation.
As others have noted, the post was great. But throughout the course
of reading it and the ensuing discussion, I became troubled by the
increasing complexity of both the status quo and some of the
proposed remedies.
Layering on special cases, more carve-outs, and X and Y percentage
thresholds is going to make reasoning about the mempool harder than it
already is. Special consideration for "what should be in the next
block" and/or the caching of block templates seems like an imposing
dependency, dragging in a bunch of state and infrastructure to a
question that should be solely limited to mempool feerate aggregates
and the feerate of the particular txn package a wallet is concerned
with.
This is bad enough for protocol designers and Core developers, but
making the situation any more intractable for "end-users" and wallet
developers feels wrong.
I thought it might be useful to step back and reframe. Here are a few
aims that are motivated chiefly by the quality of end-user experience,
constrained to obey incentive compatibility (i.e. miner reward, DoS
avoidance). Forgive the abstract dalliance for a moment; I'll talk
through concretes afterwards.
# Purely additive feerate bumps should never be impossible
Any user should always be able to add to the incentive to mine any
transaction in a purely additive way. The countervailing force here
ends up being spam prevention (a la min-relay-fee) to prevent someone
from consuming bandwidth and mempool space with a long series of
infinitesimal fee-bumps.
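That spam floor could mirror the incremental-feerate requirement that replacements already face today (BIP 125 rule 4). A simplified sketch, not the actual validation code, using Core's default incremental relay feerate of 1 sat/vB:

    INCREMENTAL_RELAY_FEERATE = 1  # sat/vB, default -incrementalrelayfee

    def bump_pays_for_itself(added_fee_sats, bump_vsize):
        # A fee-bump should at least pay for the extra relay bandwidth it
        # consumes, analogous to BIP 125 rule 4 for replacements.
        return added_fee_sats >= bump_vsize * INCREMENTAL_RELAY_FEERATE

    # e.g. a 120 vB bump adding only 50 sats would be rejected as spam:
    assert not bump_pays_for_itself(50, 120)
    assert bump_pays_for_itself(200, 120)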
A fee bump, naturally, should be given the same per-byte consideration
as a normal Bitcoin transaction in terms of relay and block space,
although it would be nice to come up with a more succinct
representation. This leads to another design principle:
# The bandwidth and chain space consumed by a fee-bump should be minimal
Instead of prompting a rebroadcast of the original transaction for
replacement, which contains a lot of data not new to the network, it
makes more sense to broadcast the "diff" which is the additive
contribution towards some txn's feerate.
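As a rough picture of what such a "diff" could carry (a purely hypothetical structure, not an existing or proposed wire format), little more than the target txid and the new funding is actually new to the network:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class FeeBumpDiff:
        # Hypothetical relay payload: only the data the network has not
        # already seen. The original transaction is referenced by txid,
        # never resent.
        target_txid: str            # transaction whose feerate is being bumped
        new_inputs: List[bytes]     # serialized, signed inputs adding fee
        new_outputs: List[bytes]    # optional change back to the bumper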
This dovetails with the idea that...
# Special transaction structure should not be required to bump fees
In an ideal design, special structural foresight would not be needed
in order for a txn's feerate to be improved after broadcast.
Anchor outputs specified solely for CPFP, which amount to many bytes of
wasted chainspace, are a hack. It's probably uncontroversial at this
point to say that even RBF itself is kind of a hack - a special
sequence number should not be necessary for post-broadcast contribution
toward feerate. Not to mention RBF's seemingly wasteful consumption of
bandwidth due to the rebroadcast of data the network has already seen.
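For reference, the "special sequence number" in question is just BIP 125 opt-in signaling; a simplified check (ignoring signaling inherited from unconfirmed ancestors) looks like:

    MAX_BIP125_RBF_SEQUENCE = 0xfffffffd

    def signals_rbf(tx) -> bool:
        # Explicit BIP 125 signaling: at least one input with nSequence
        # below 0xfffffffe. (Core also treats descendants of signaling
        # unconfirmed transactions as replaceable.)
        return any(txin.nSequence <= MAX_BIP125_RBF_SEQUENCE for txin in tx.vin)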
In a sane design, no structural foresight - and certainly no wasted
bytes in the form of unused anchor outputs - should be needed in order
to add to a miner's reward for confirming a given transaction.
Planning for fee-bumps explicitly in transaction structure also often
winds up locking in which keys are required to bump fees, at odds
with the idea that...
# Feerate bumps should be able to come from anywhere
One of the practical downsides of CPFP that I haven't seen discussed in
this conversation is that it requires the transaction to pre-specify the
keys needed to sign for fee bumps. This is problematic if you're, for
example, using a vault structure that makes use of pre-signed
transactions.
What if the key you specified in the anchor outputs for a bunch of
pre-signed txns is compromised? What if you'd like to be able to
dynamically select the wallet that bumps fees? CPFP does you no favors
here.
There is of course a tension between allowing fee bumps to come from
anywhere and the threat of pinning-like attacks. So we should venture
to remove pinning as a possibility, in line with the first design
principle I discuss.
---
Coming down to earth, the "tabula rasa" thought experiment above has led
me to favor an approach like the transaction sponsors design that Jeremy
proposed in a prior discussion back in 2020[1].
Transaction sponsors allow feerates to be bumped after a transaction's
broadcast, regardless of the structure of the original transaction.
No rebroadcast (wasted bandwidth) is required for the original txn data.
No wasted chainspace on only-maybe-used prophylactic anchor outputs.
The interface for end-users is very straightforward: if you want to bump
fees, specify a transaction that contributes incrementally to package
feerate for some txid. Simple.
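Concretely, the wallet-facing interface could be as small as this sketch (hypothetical helpers throughout; how the commitment to the sponsored txid is encoded is up to the consensus rules in [1]):

    def sponsor_bump(wallet, txid, added_fee_sats):
        # Hypothetical wallet call: build and broadcast a transaction that
        # commits to `txid` under the sponsor rules and contributes
        # `added_fee_sats` toward that transaction's package feerate.
        coins = wallet.select_coins(added_fee_sats)   # assumed helper
        tx = wallet.build_tx(inputs=coins,
                             sponsor_txid=txid,       # sponsor commitment
                             fee=added_fee_sats,
                             rbf=True)                # keep it bumpable
        return wallet.broadcast(tx)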
In the original discussion, there were a few main objections that I noted:
1. In Jeremy's original proposal, only one sponsor txn per txid is
allowed by policy. A malicious actor could execute a pinning-like
attack by specifying an only-slightly-helpful feerate sponsor that
then precludes other larger bumps.
I think there are some ways around this shortcoming. For example: what
if, by policy, sponsor txns had additional constraints that
- each input must be signed {SIGHASH_SINGLE,SIGHASH_NONE}|ANYONECANPAY,
- the txn must be specified RBFable,
- a replacement for the sponsor txn must raise the sponsor feerate,
including ancestors (maybe this is inherent in "is RBFable," but
I don't want to conflate absolute feerates into this).
That way, there is still at most a single sponsor txn per txid in the
mempool, but anyone can "mix in" inputs which bump the effective
feerate of the sponsor.
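A sketch of that "mix in" step, with hypothetical helpers, to show why the ANYONECANPAY signatures let a third party reuse the existing sponsor's inputs unchanged:

    def mix_in_bump(existing_sponsor, my_utxo, sign):
        # Each existing input was signed {SINGLE,NONE}|ANYONECANPAY, so its
        # signature does not commit to the other inputs: we can append our
        # own input without invalidating anything already there.
        bumped = existing_sponsor.copy()
        bumped.vin.append(my_utxo.as_input())
        # Leave the outputs alone; the appended input's value goes to fees,
        # raising the sponsor's effective feerate, and the result conflicts
        # with (i.e. RBF-replaces) the old sponsor because it spends the
        # same inputs.
        bumped.vin[-1].sig = sign(bumped, index=len(bumped.vin) - 1,
                                  sighash="NONE|ANYONECANPAY")
        return bumped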
This may not be the exact solution we want, but I think it demonstrates
that the sponsors design has some flexibility and merits some thinking.
The second objection about sponsors was
2. (from Suhas) sponsors break the classic invariant: "once a valid
transaction is created, it should not become invalid later on unless
the inputs are double-spent."
This doesn't seem like a huge concern to me if you consider the txid
being sponsored as a sort of spiritual input to the sponsor. While the
theoretical objection against broadening where one has to look in a txn
to determine its dependencies is understandable, I don't see what the
practical cost here is.
Reorg complexity seems comparable if not identical, especially if we
broaden sponsor rules to allow blocks to contain sponsor txns that are
both for txids in the same block _or_ already included in the chain.
This theoretical concession seems preferable to heaping more rules onto
an already labyrinthine mempool policy that is difficult for both
implementers and users to reason about practically and conceptually.
A third objection that wasn't posed, IIRC, but almost certainly would
be:
3. Transaction sponsors require a soft-fork.
Soft-forks are no fun, but I'll tell you what also isn't fun: being on
the hook to model (and sometimes implement) a dizzying potpourri of
mempool policies and special-cases. Expecting wallet implementers to
abide by a maze of rules faithfully in order to ensure txn broadcast and
fee management invites bugs in perpetuity and network behavior that is
difficult to reason about a priori. Use of CPFP in the long-term also
risks needless chain waste.
If a soft-fork is the cost of cleaning up this essential process,
consideration should be given to paying it as a one-time cost. This
topic merits a separate post, but consider that in the 5 years leading
up to the 2017 SegWit drama, we averaged about a soft-fork a year.
Uncontroversial, "safe" changes to the consensus protocol shouldn't be
out of the question when significant practical benefit is plain to see.
---
I hope this message has added some framing to the discussion on fees,
as well as prompting other participants to go back and give the
transaction sponsor proposal a serious look. The sponsors interface is
about the simplest I can imagine for wallets, and it seems easy to
reason about for implementers on Core and elsewhere.
I'm not out to propose soft-forks lightly, but the current complexity
in fee management feels untenable, and as evidenced by all the
discussion lately, fees are an increasingly crucial part of the system.
[0]:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019817.html
[1]:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
[-- Attachment #2: Type: text/html, Size: 9696 bytes --]
^ permalink raw reply [flat|nested] 25+ messages in thread
end of thread, other threads:[~2022-02-18 9:01 UTC | newest]
Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-18 2:08 [bitcoin-dev] Thoughts on fee bumping Prayank
-- strict thread matches above, loose matches on Subject: below --
2022-02-18 0:54 Prayank
2022-02-10 19:40 James O'Beirne
2022-02-10 23:09 ` Greg Sanders
2022-02-10 23:44 ` darosior
2022-02-10 23:51 ` James O'Beirne
2022-02-11 6:51 ` darosior
2022-02-12 19:44 ` Billy Tetrud
2022-02-11 0:12 ` Matt Corallo
2022-02-14 19:51 ` James O'Beirne
2022-02-17 14:32 ` Anthony Towns
2022-02-17 18:18 ` James O'Beirne
2022-02-18 9:01 ` darosior
2022-02-18 0:35 ` Antoine Riard
2022-02-11 5:26 ` Antoine Riard
2022-02-14 20:28 ` James O'Beirne
2022-02-15 0:43 ` Antoine Riard
2022-02-15 17:09 ` Billy Tetrud
2022-02-15 20:24 ` Russell O'Connor
2022-02-15 20:53 ` James O'Beirne
2022-02-15 21:37 ` Jeremy Rubin
2022-02-15 21:38 ` Jeremy Rubin
2022-02-16 2:54 ` Billy Tetrud
2022-02-16 19:18 ` James O'Beirne
2022-02-16 20:36 ` Billy Tetrud