Hi RBF friends,
Posting a summary of the RBF discussions at coredev, covering transaction relay rate-limiting (where most of the discussion was), a user-elected descendant limit as a short-term solution to unblock package RBF, and a "mining score" calculator. All of it is open for feedback:
One big concept discussed was baking DoS protection into the p2p layer rather than the policy layer. TL;DR: the fees are not paid to the node operator, but to the miner. While we can use fees to reason about the cost of an attack, if we're ultimately interested in preventing resource exhaustion, maybe we want to "stop the bleeding" when it happens and bound the amount of resources used in general. There were two main ideas:
1. Transaction relay rate limiting (i.e. the one you proposed above, or some variation) with a feerate-based priority queue (a rough sketch follows this list)
2. Staggered broadcast of replacement transactions: within some time interval, maybe accept multiple replacements for the same prevout, but only relay the original transaction.
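To make idea 1 concrete, here's a minimal sketch of what the relay side might look like: a feerate-ordered priority queue drained under a per-interval vbyte budget. Everything here is hypothetical (the class, names, and parameters don't exist in Bitcoin Core); it's just to anchor the discussion:

```cpp
// Hypothetical sketch of idea 1: announcements are drained from a
// feerate-ordered priority queue, subject to a per-interval vbyte budget.
// None of these names are real Bitcoin Core interfaces.
#include <cstdint>
#include <queue>
#include <vector>

struct Announcement {
    uint64_t tx_ref;  // stand-in for a txid/wtxid reference
    int64_t fee;      // total fee in satoshis
    int32_t vsize;    // virtual size in vbytes
    // Higher feerate = higher priority. (A real version might use a
    // mining score instead; see the last section of this post.)
    double Feerate() const { return static_cast<double>(fee) / vsize; }
    bool operator<(const Announcement& other) const {
        return Feerate() < other.Feerate();  // max-heap on feerate
    }
};

class RateLimitedRelayQueue {
    std::priority_queue<Announcement> m_queue;
    int64_t m_budget_vbytes;               // tokens remaining this interval
    const int64_t m_budget_per_interval;   // refill amount
public:
    explicit RateLimitedRelayQueue(int64_t budget)
        : m_budget_vbytes{budget}, m_budget_per_interval{budget} {}

    void Add(Announcement ann) { m_queue.push(ann); }

    // Called once per interval (e.g. by a scheduler): refill the budget.
    void NewInterval() { m_budget_vbytes = m_budget_per_interval; }

    // Drain the highest-feerate announcements that fit in the budget.
    std::vector<Announcement> GetAnnouncements() {
        std::vector<Announcement> out;
        while (!m_queue.empty() && m_queue.top().vsize <= m_budget_vbytes) {
            Announcement ann = m_queue.top();
            m_queue.pop();
            m_budget_vbytes -= ann.vsize;
            out.push_back(ann);
        }
        return out;
    }
};
```

One design note on the sketch: draining strictly in priority order means whoever keeps the top of the queue full controls what gets drained, which is exactly the censorship concern below.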
Looking to solicit feedback on these ideas and the concept in general. Is it a good idea (separate from RBF) to add rate-limiting in transaction relay? And is it the right direction to think about RBF DoS protection this way?
A lingering concern I have about this idea is that it would then be possible to impact the propagation of another person's transaction: an attacker could prevent somebody's transaction from ever being announced by a node by sending enough transactions to fill up the rate limit. Obviously this would be expensive, since they're spending a lot on fees, but I imagine it could be profitable in some situations to spend a few thousand dollars to prevent anyone from hearing about a transaction for a few hours. This might be a non-issue in practice if the rate limit is generous and traffic isn't horrendous, but is this a problem?
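To put rough numbers on it (all made up, purely illustrative, not proposed parameters): if the rate limit were one block's worth of transactions, 1,000,000 vB per 10 minutes, and honest traffic sat around 5 sat/vB, an attacker bidding 6 sat/vB could fill an entire interval for about 1,000,000 × 6 = 6,000,000 sat, i.e. 0.06 BTC per 10 minutes, or roughly 0.36 BTC per hour of blackout. Whether that's prohibitive depends entirely on the parameters and on what the attacker stands to gain, e.g. an HTLC worth more than the fees burned.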
And if we don't require an increase in (i.e. addition of "new") absolute fees, users are essentially allowed to “recycle” fees. In the scenario where we prioritize relay based on feerate, users could potentially be placed higher in the queue, ahead of other users’ transactions, multiple times, without ever adding more fees to the transaction. Again, maybe this isn’t a huge deal in practice if we set the parameters right, but it seems… not great, in principle.
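For example (made-up numbers): I broadcast transaction A, 1,000 vB paying 10,000 sat (10 sat/vB), and it sits near the front of the queue. Before it's relayed, I replace it with B, 500 vB paying the same 10,000 sat, now 20 sat/vB, and jump the queue again. Since only whichever version eventually confirms pays any fees at all, I can keep buying relay priority with the same 10,000 sat over and over.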
---------
Suhas and Matt [proposed][0] adding a policy rule allowing users to specify descendant limits on their transactions. For example, some nth bit of nSequence with nVersion 3 means "this transaction won't have more than X vbytes of descendants" where X = max(1000, vsizeof(tx)) or something. It solves the pinning problem with package RBF where the attacker's package contains a very large and high-fee descendant.
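For concreteness, here's a minimal sketch of what the mempool-side check might look like, with made-up types, a placeholder signaling bit, and the max(1000, vsizeof(tx)) allowance from the example above (the actual implementation is in [1]):

```cpp
// Hypothetical sketch of the user-elected descendant limit check.
// Types, constants, and the choice of signaling bit are placeholders,
// not Bitcoin Core APIs.
#include <algorithm>
#include <cstdint>
#include <vector>

struct TxIn { uint32_t nSequence; };
struct Tx { int32_t nVersion; std::vector<TxIn> vin; };
struct MempoolEntry { Tx tx; int64_t vsize; };

static constexpr uint32_t USER_DESCENDANT_LIMIT_BIT{1u << 30};  // example bit
static constexpr int64_t MIN_DESCENDANT_ALLOWANCE{1000};        // vbytes

// True if `ancestor` opted in (nVersion 3 + signaling bit) and accepting
// `candidate_vsize` more vbytes of descendants would exceed
// max(1000, vsizeof(ancestor)).
bool ExceedsElectedDescendantLimit(const MempoolEntry& ancestor,
                                   int64_t existing_descendant_vsize,
                                   int64_t candidate_vsize)
{
    if (ancestor.tx.nVersion != 3) return false;
    if (ancestor.tx.vin.empty() ||
        !(ancestor.tx.vin[0].nSequence & USER_DESCENDANT_LIMIT_BIT)) {
        return false;
    }
    const int64_t allowance = std::max(MIN_DESCENDANT_ALLOWANCE, ancestor.vsize);
    return existing_descendant_vsize + candidate_vsize > allowance;
}
```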
We could add this policy and deploy it with package RBF/package relay so that LN can use it by setting the user-elected descendant limit flag on commitment transactions. (Otherwise package RBF is blocked until we find a more comprehensive solution to the pinning attack).
It's simple to [implement][1] as a mempool policy, but adds some complexity for wallets that use it, since it limits their use of UTXOs from transactions with this bit set.
---------
Also, coming back to the idea of "we can't just use {individual, ancestor} feerate," I'm interested in soliciting feedback on adding a "mining score" calculator. I've implemented one [here][2] which takes the transaction in question, grabs all of the connected mempool transactions (including siblings, coparents, etc., as they would be in neither the ancestor nor the descendant set), and builds a "block template" using our current mining algorithm. The mining score of a transaction is the ancestor feerate at which it is included.
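In case it helps the discussion, here's a rough sketch of that calculation: rerun the ancestor-set-feerate selection (the shape of our current mining algorithm) over just the target's connected cluster, and report the ancestor feerate of the step at which the target gets picked. All types, ids, and the cluster-gathering step are made up; [2] is the real thing:

```cpp
#include <cstdint>
#include <map>
#include <set>
#include <utility>

struct Entry {
    int64_t fee;            // satoshis
    int64_t vsize;          // vbytes
    std::set<int> parents;  // ids of in-cluster parents
};

// `cluster` must be the full connected component containing `target`
// (siblings, coparents, etc. included), with every parent id also a
// key of `cluster`.
double MiningScore(std::map<int, Entry> cluster, int target)
{
    // Collect `id` plus all of its not-yet-"mined" ancestors.
    auto with_ancestors = [&cluster](int id) {
        std::set<int> anc{id};
        std::set<int> frontier{id};
        while (!frontier.empty()) {
            const int cur = *frontier.begin();
            frontier.erase(frontier.begin());
            for (int p : cluster.at(cur).parents) {
                if (anc.insert(p).second) frontier.insert(p);
            }
        }
        return anc;
    };

    while (!cluster.empty()) {
        // Pick the remaining transaction with the best ancestor feerate.
        double best_rate = -1.0;
        std::set<int> best_set;
        for (const auto& kv : cluster) {
            std::set<int> anc = with_ancestors(kv.first);
            int64_t fee = 0, vsize = 0;
            for (int a : anc) {
                fee += cluster.at(a).fee;
                vsize += cluster.at(a).vsize;
            }
            const double rate = static_cast<double>(fee) / vsize;
            if (rate > best_rate) {
                best_rate = rate;
                best_set = std::move(anc);
            }
        }
        // The mining score of `target` is the ancestor feerate of the
        // selection step in which it is included.
        if (best_set.count(target)) return best_rate;
        // Otherwise "mine" that set: remove it and detach parent links.
        for (int id : best_set) cluster.erase(id);
        for (auto& kv : cluster) {
            for (int mined : best_set) kv.second.parents.erase(mined);
        }
    }
    return 0.0;  // unreachable if `target` is in `cluster`
}
```

This toy version is quadratic-ish; doing it efficiently inside the mempool, and keeping the result fresh as transactions come and go, is where the caching complexity mentioned below comes in.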
This would be helpful for something like ancestor-aware funding and fee-bumping in the wallet: [3], [4]. I think if we did the rate-limited priority queue for transaction relay, we'd want to use something like this as the priority value. And for RBF, we probably want to require that a replacement have a higher mining score than the original transactions. This could be computationally expensive to do all the time; caching the result would help, but would make mempool bookkeeping more complicated. Also, if we end up switching to a candidate-set-based algorithm for mining, we'd of course need a new calculator.
Best,
Gloria