Hi Mempoololic Anonymous fellow,
> 2. Staggered broadcast of replacement transactions: within some time
> interval, maybe accept multiple replacements for the same prevout, but only
> relay the original transaction.
If the goal of replacement staggering is to save on bandwidth, I'm not sure it's going to be effective once you consider replacements done from a shared UTXO. E.g., Alice broadcasts a package to confirm her commitment, and relay is staggered until T. At the same time, Bob broadcasts a package to confirm his version of the commitment at a slightly better feerate, and relay is staggered until T.
At T, package A gradually floods out from Alice's peers and package B does the same from Bob's peers. Where the two floods intersect, B overrides A and starts to replace package A in the network mempools nearest to Alice. I don't think those peers get any bandwidth saving from adopting a replacement staggering strategy.
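A toy sketch of that propagation pattern (the line topology, the package size and the one-hop-per-tick flooding are all simplifying assumptions of mine, just to make the intersection effect visible):

```python
# 11 nodes on a line, Alice injects package A at node 0, Bob injects the
# conflicting, slightly higher-feerate package B at node 10. Both floods
# start at time T. A node relays whatever it currently accepts, so a node
# that saw A first relays A, then relays B again once B replaces A.

N = 11
PKG_SIZE = 1000  # vbytes announced per package relay (arbitrary)

bandwidth = []
for node in range(N):
    arrival_a = node              # hops from Alice's entry point (node 0)
    arrival_b = (N - 1) - node    # hops from Bob's entry point (node 10)
    if arrival_a < arrival_b:
        # A arrives first and is relayed; B later replaces it and is relayed too.
        bandwidth.append(2 * PKG_SIZE)
    else:
        # B arrives first (or ties); the lower-feerate A is rejected, never relayed.
        bandwidth.append(1 * PKG_SIZE)

print(bandwidth)
# -> [2000, 2000, 2000, 2000, 2000, 1000, 1000, 1000, 1000, 1000, 1000]
# The nodes nearest to Alice still announce both packages: staggering until T
# didn't buy them any bandwidth saving for this shared-utxo replacement pattern.
```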
Or maybe you have something completely different in mind? I think the staggering idea needs more detail before we can tell whether it's robust against all the replacement propagation patterns.
Though if we aim to save on replacement bandwidth, I wonder whether a "diff-only" relay strategy, assuming some new p2p mechanism, would be more interesting (as discussed in the recent "Thoughts on fee bumping" thread).
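Purely illustrative strawman of what such a "diff" could carry (no such p2p message exists today, and all byte counts are rough): the txid being replaced, the mutated change output, and the fresh witnesses, since the rest of the transaction is unchanged by a fee bump.

```python
def full_relay_bytes(n_inputs):
    # ~148 raw bytes per P2WPKH input (outpoint + sequence + witness),
    # two outputs and a dozen bytes of overhead, very roughly
    return 148 * n_inputs + 2 * 31 + 12

def diff_relay_bytes(n_inputs):
    # 32-byte txid reference + ~9 bytes for the mutated output amount
    # + ~110 bytes per fresh witness (all signatures change on a fee bump)
    return 32 + 9 + 110 * n_inputs

for n in (1, 5, 25):
    full, diff = full_relay_bytes(n), diff_relay_bytes(n)
    print(n, full, diff, f"{100 * (1 - diff / full):.0f}% saved")
# The saving is real but bounded: the witnesses still have to be re-relayed,
# so whether it justifies a new p2p mechanism is exactly the open question.
```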
> A lingering concern that I have about this idea is it would then be
> possible to impact the propagation of another person’s transaction, i.e.,
> an attacker can censor somebody’s transaction from ever being announced by
> a node if they send enough transactions to fill up the rate limit.
> Obviously this would be expensive since they're spending a lot on fees, but
> I imagine it could be profitable in some situations to spend a few thousand
> dollars to prevent anyone from hearing about a transaction for a few hours.
> This might be a non-issue in practice if the rate limit is generous and
> traffic isn’t horrendous, but is this a problem?
I share the concern about an attacker exhausting a node's transaction relay resources to prevent another person's transaction from propagating, especially if the targeted transaction is a time-sensitive L2 one. In that context, the attacker aims to delay the relay of a time-sensitive transaction (e.g. an HTLC-success) to the miners until the timelock expires. The malicious delay period has to swallow the go-to-chain HTLC deadline ("the deadline for received HTLCs this node fulfilled" in BOLT 2 parlance), 18 blocks in the current example.
Let's say we allocate 10 MB of relay bandwidth per block period. Once the 10 MB are exhausted, no more bandwidth is allocated until the next block is issued. If the top mempool feerate is 1 sat/vb, such a naive design would allow an attacker to buy a whole block period's worth of p2p network bandwidth for 0.1 BTC. If the attacker aims to jam an HTLC transaction for the whole 18-block period, the cost is 1.8 BTC. If the attacker is an LN counterparty to an HTLC worth more than 1.8 BTC, the attack sounds economically profitable.
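A quick sanity check of those numbers (the 10 MB budget, the 1 sat/vb top feerate and the 18-block deadline are all assumptions of this example, nothing implemented):

```python
BUDGET_VBYTES = 10_000_000     # assumed relay bandwidth budget per block period
TOP_FEERATE = 1                # sat/vb, assumed top of the mempool
HTLC_DEADLINE_BLOCKS = 18      # go-to-chain HTLC deadline used above

cost_per_block = BUDGET_VBYTES * TOP_FEERATE               # 10,000,000 sats
cost_full_deadline = cost_per_block * HTLC_DEADLINE_BLOCKS

print(cost_per_block / 1e8, "BTC per block")               # 0.1 BTC
print(cost_full_deadline / 1e8, "BTC over the deadline")   # 1.8 BTC
```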
Worse, the p2p network bandwidth is a public resource while an HTLC is a private, off-chain contract. An attacker could be a counterparty to many HTLCs, where each individual HTLC value is far below the global p2p bandwidth cost but their sum, known only to the attacker, is above it. Therefore, it sounds to me that buying out the p2p network bandwidth might be attractive if the stealing is batched.
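To make the batching point concrete (the per-HTLC values are invented purely for illustration):

```python
BANDWIDTH_COST_BTC = 1.8           # total relay-exhaustion cost from above
htlc_values_btc = [0.3] * 10       # e.g. ten victim HTLCs of 0.3 BTC each

print(max(htlc_values_btc) < BANDWIDTH_COST_BTC)  # True: no single HTLC justifies the attack
print(sum(htlc_values_btc) > BANDWIDTH_COST_BTC)  # True: the batch does (3.0 > 1.8)
```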
Is the attack scenario described credible? Are the numbers sketched out realistic?
If yes, I think one design insight for eventual transaction relay rate limits would be to make them "dynamic", and not naively fixed for a period. By making them dynamic, an attacker would have to compete with the effective feerate offered by the victim transaction. E.g., if the HTLC-success feerate is 10 sat/vb, the attacker would have to sustain a stream of malicious transactions above 10 sat/vb during the whole HTLC deadline period for the transaction-relay jamming to be effective.
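A minimal sketch of what I mean by "dynamic" (a strawman of mine, not a concrete proposal): fill the per-block announcement budget by descending feerate rather than first-come-first-served, so the attacker has to outbid the victim's feerate across the whole budget, not merely arrive first.

```python
def announced(candidates, budget_vbytes):
    """candidates: list of (feerate sat/vb, vsize vbytes); return the announced subset."""
    out, used = [], 0
    for feerate, vsize in sorted(candidates, key=lambda c: -c[0]):
        if used + vsize <= budget_vbytes:
            out.append((feerate, vsize))
            used += vsize
    return out

victim = (10, 700)                          # HTLC-success package at 10 sat/vb
attacker_cheap = [(1, 100_000)] * 100       # 10 MB of 1 sat/vb junk
attacker_matching = [(11, 100_000)] * 100   # the same 10 MB, but at 11 sat/vb

print(victim in announced(attacker_cheap + [victim], 10_000_000))     # True
print(victim in announced(attacker_matching + [victim], 10_000_000))  # False
```

Under the naive fixed budget, the 1 sat/vb junk alone is enough to jam the victim; under the feerate-ordered budget the attacker has to pay above the victim's feerate for the whole period.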
Further, the attack might be invisible from the victim's standpoint: the malicious flow of feerate-competitive transactions can be hard to dissociate from an honest one. Thus, you can expect the HTLC transaction issuer to only slowly increase the feerate at each block, and those moves to be anticipated by the attacker. Even if the transaction issuer adopts a scorched-earth approach for the last blocks of the deadline, the absolute value of the HTLC burnt in fees might still be less than the cost of the transaction-relay bandwidth exhaustion paid by the attacker, precisely because the attacker batches the attack across many HTLCs.
I'm not sure if this reasoning is correct. Though if it is, the issue sounds really similar to the "flood & loot" attack on LN previously researched in [0]. What worries me more with this "exhaust & loot" variant is that, if we introduce bounded transaction relay rate limits, they sound like a cheaper public resource to buy out than the mempool itself.
[0] https://arxiv.org/pdf/2006.08513.pdf

Anyway, I would say it's worth investigating transaction relay rate-limiting designs further, and especially carefully weighing the implications for L2s. Those might have to adjust their fee-bumping and transaction rebroadcast strategies in consequence.
> Suhas and Matt [proposed][0] adding a policy rule allowing users to specify
> descendant limits on their transactions. For example, some nth bit of
> nSequence with nVersion 3 means "this transaction won't have more than X
> vbytes of descendants" where X = max(1000, vsizeof(tx)) or something. It
> solves the pinning problem with package RBF where the attacker's package
> contains a very large and high-fee descendant.
Hey, what if the pinning transaction has a parent with a junk feerate?
Let's say you have a commitment tx with an HTLC of value 500,000 sats, and a top mempool feerate of 50 sat/vbyte. The commitment tx is pinned by a malicious tx of size 1,000 vbytes matching the top mempool feerate. This malicious tx has a second unconfirmed parent (in addition to the commitment) of size MAX_STANDARD_TX_WEIGHT (400,000 WU, i.e. 100,000 vbytes) offering 1 sat/vb. I think the pinning transaction's ancestor score would be less than 2 sat/vb and thus considered irrelevant for block template inclusion? At the same time, as the pinning transaction itself pays the top mempool feerate, the honest user wouldn't be able to replace it with a better-feerate proposal? Unless they adopt a scorched-earth approach, though economically I don't think this fee-bumping strategy is safe in case of batch-pinning.
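Working out the ancestor score for that scenario (the commitment size and fee are guesses of mine; the other numbers come from the example above):

```python
commitment = dict(vsize=700, fee=700)           # ~1 sat/vb pre-signed commitment (my guess)
junk_parent = dict(vsize=100_000, fee=100_000)  # MAX_STANDARD_TX_WEIGHT-sized parent at 1 sat/vb
pin = dict(vsize=1_000, fee=50_000)             # pinning child at the 50 sat/vb top feerate

ancestor_fee = pin["fee"] + junk_parent["fee"] + commitment["fee"]
ancestor_vsize = pin["vsize"] + junk_parent["vsize"] + commitment["vsize"]

print(round(ancestor_fee / ancestor_vsize, 2))  # ~1.48 sat/vb, nowhere near the 50 sat/vb top
print(pin["fee"] / pin["vsize"])                # 50.0 sat/vb: the individual feerate a
                                                # replacement has to beat under
                                                # BIP125-style feerate rules
```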
It might be fixable if we add one more requirement, "the child transaction subject to the user-elected descendant limit must have only one unconfirmed parent" (here the commitment transaction)? Though I'm not even sure of the robustness of this fix. The commitment transaction itself could be used as a junk parent to downgrade the pinning transaction's ancestor score, e.g. a revoked commitment transaction stuffed with `max_accepted_htlcs` outputs on both sides, pre-signed with a feerate of 1 sat/vb. We might restrict the maximum number of pending HTLCs network-wide to keep the worst-case commitment transaction size reasonable, though I'm not sure my LN colleagues are going to like the idea...
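Back-of-envelope for that revoked-commitment case, using BOLT 3's rough weight accounting (724 WU base plus 172 WU per untrimmed HTLC output, non-anchor commitment) and the 483-per-side cap on max_accepted_htlcs; the pinning child is the same as in the example above:

```python
BASE_WEIGHT = 724
HTLC_OUTPUT_WEIGHT = 172
num_htlcs = 483 * 2                      # both sides stuffed to the limit

commitment_vsize = (BASE_WEIGHT + HTLC_OUTPUT_WEIGHT * num_htlcs) / 4
commitment_fee = commitment_vsize * 1    # pre-signed at 1 sat/vb

pin_vsize, pin_fee = 1_000, 50_000       # same pinning child as above

score = (pin_fee + commitment_fee) / (pin_vsize + commitment_vsize)
print(round(commitment_vsize), round(score, 2))
# -> ~41,719 vbytes and an ancestor score of ~2.15 sat/vb
```

So even with the "single unconfirmed parent" restriction, a maximally stuffed commitment keeps the pin's ancestor score far below the 50 sat/vb top of the mempool.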
Is that reasoning correct and consistent with our ancestor-set-based block assembly algorithm? Maybe more details are needed.
> Also, coming back to the idea of "we can't just use {individual, ancestor}
> feerate," I'm interested in soliciting feedback on adding a “mining score”
> calculator. I've implemented one [here][2] which takes the transaction in
> question, grabs all of the connected mempool transactions (including
> siblings, coparents, etc., as they wouldn’t be in the ancestor nor
> descendant sets), and builds a “block template” using our current mining
> algorithm. The mining score of a transaction is the ancestor feerate at
> which it is included.
I don't have a strong opinion there yet. Though if we make this "block template" construction the default one, I would be really conservative, to avoid malicious children attached to multi-party transactions downgrading the block inclusion efficiency.