From: Billy Tetrud
Date: Mon, 14 Mar 2022 20:43:31 -0500
To: Gloria Zhao
Cc: Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>, Anthony Towns
Subject: Re: [bitcoin-dev] Improving RBF Policy
Hi Gloria,

It seems you're responding to what I wrote on GitHub. Happy to respond, but perhaps we should keep it there so the conversation is kept linearly together?

I'm curious what you think of my thoughts on what you brought up most recently in this thread about rate limiting / staggered window DoS protection stuff.

Cheers,
BT

On Mon, Mar 14, 2022 at 5:29 AM Gloria Zhao wrote:

> Hi Billy,
>
> > We should expect miners will be using a more complex, more optimal way of determining what blocks they're working on [...] we should instead run with the assumption that miners keep all potentially relevant transactions in their mempools, including potentially many conflicting transactions, in order to create the most profitable blocks. And therefore we shouldn't put the constraint on normal non-mining full nodes to do that same more-complex mempool behavior or add any complexity for the purpose of denying transaction replacements.
>
> > I think a lot of the complexity in these ideas is because of the attempt to match relay rules with miner inclusion rules.
>
> I think the assumption that miners are using a completely different implementation of mempool and block template building is false. IIUC, most miners use Bitcoin Core and perhaps configure their node differently (e.g. larger mempool and different minfeerates), but also use `getblocktemplate`, which means the same ancestor package-based mining algorithm.
>
> Of course, I'm not a miner, so if anybody is a miner or has seen miners' setups, please correct me if I'm wrong.
>
> In either case, we would want our mining algorithm to result in block templates that are as close as possible to perfectly incentive compatible.
>
> Fundamentally, I believe default mempool policy (which perhaps naturally creates a network-wide transaction relay policy) should be as close to the mining code as possible. Imagine node A only keeps 1 block's worth of transactions, and node B keeps a (default) 300MB mempool. The contents of node A's mempool should be as close as possible to a block template generated from node B's mempool. Otherwise, node A's mempool is not very useful - its fee estimation is flawed, and compact block relay won't do it much good if it needs to re-request a lot of block transactions.
> Next, imagine that node B is a miner. It would be very suboptimal if the mining code was ancestor package-based (i.e. supports CPFP), but the mempool policy only cared about individual feerate and evicted low-fee parents despite their high-fee children. It's easy to see why this would be suboptimal.
> Attempting to match mempool policy with the mining algorithm is also arguably the point of package relay. Our mining code uses ancestor packages, which is good, but we only submit transactions one at a time to the mempool, so a transaction's high-fee children can't be considered until they are all already in the mempool. Package relay allows us to start thinking about ancestor packages immediately when evaluating transactions for submission to the mempool.
>
> The attempt to match policy with miner inclusion rules is deliberate and necessary.
>
> > I want to echo James O'Beirne's opinion on this that this may be the wrong path to go down (a path of more complexity without much gain).
> He said: "Special consideration for "what should be in the next block" and/or the caching of block templates seems like an imposing dependency, dragging in a bunch of state and infrastructure to a question that should be solely limited to mempool feerate aggregates and the feerate of the particular txn package a wallet is concerned with."
>
> It seems that I under-explained the purpose of building/caching block templates in my original post, since both you and James have the same misunderstanding. Since RBF's introduction, we have improved to an ancestor package-based mining algorithm. This supports CPFP (incentive compatible), and it is now common to see more complex "families" of transactions as users fee-bump transactions (the market is working, yay). On the other hand, we no longer have an accurate way of determining a transaction's "mining score," i.e., the feerate of this transaction's ancestor package when it is included in a block template using our current mining algorithm.
>
> This limitation is a big blocker in proposing new fee/feerate RBF rules. For example, if we say "the transaction needs a better feerate," this is obviously flawed, since the original transactions may have very high-feerate children, and the replacement transaction may have low-feerate parents. So what we really want is "the transaction needs to be more incentive compatible to mine based on our mining algorithm," but we have no way of getting that information right now.
>
> In my original post, I [described 4 heuristics to get a transaction's "mining score"][1] using the current data cached in the mempool (e.g. max ancestor feerate of descendant set), as well as why they don't work. As such, the best way to calculate a transaction's mining score AFAICT is to grab all of the related transactions and build a mini "block template" with them. The [implementation][2] I sent last week also cuts out some of the fluff, so the pseudocode looks like this:
>
> // Get ALL connected entries (ancestors, descendants, siblings, cousins, coparents, etc.)
> vector<TxMempoolEntry> cluster = mempool.GetAllTransactionsRelatedTo(txid);
> sort(cluster, ancestorfeerate);
>
> // For deducting ancestors when they are included separately
> vector<ModifiedTxMempoolEntry> modified_entries;
>
> while (!cluster.empty() or !modified_entries.empty()) {
>     iter = BetterAncestorFeerateOf(cluster, modified_entries);
>     best_ancestor_package = GetAncestorSet(iter);
>     mining_score = Feerate(best_ancestor_package);
>     for (entry : best_ancestor_package) {
>         mining_scores[entry] = mining_score;
>         for (descendant : GetAllDescendants(entry)) {
>             modified_entries[descendant].DeductAncestorFromAccounting(entry);
>         }
>     }
> }
>
> Perhaps somebody will come up with a better way to do this, but this is my solution, and I hope I've now sufficiently described why I've made an effort to think about this. It's not because I want to make things more fancy or complicated, but because we currently have no way of accurately determining a transaction's mining score. The reason I proposed a persistent block template is so we can efficiently query the mining score of all transactions in the mempool.
>
> > However, I do think that these DOS concerns are quite overblown. I wrote up a comment on your rbf-improvements.md detailing my thought process on that.
> > The summary is that as long as the fee-rate relay rule is maintained, any "spam" is actually paid for, either by the "first" transaction in the spam chain, or by the "spam" itself.
>
> The DoS concern is not overblown. I recommend you re-read the [current RBF policy][3]; Rules 3 and 4 *are* the feerate relay rules. Removing Rule 3 means allowing recycled fees, so new transactions are not necessarily "paying" anything.
>
> [1]: https://gist.github.com/glozow/25d9662c52453bd08b4b4b1d3783b9ff?permalink_comment_id=4093100#mining-score-of-a-mempool-transaction
> [2]: https://github.com/glozow/bitcoin/tree/2022-02-mining-score
> [3]: https://github.com/bitcoin/bitcoin/blob/master/doc/policy/mempool-replacements.md
>
> Best,
> Gloria
>
> On Sat, Mar 12, 2022 at 8:18 AM Billy Tetrud wrote:
>
>> In reading through more of the discussion, it seems the idea I presented above might basically be a reformulation of t-bast's rate-limiting idea presented in this comment (https://gist.github.com/glozow/25d9662c52453bd08b4b4b1d3783b9ff?permalink_comment_id=4081349#gistcomment-4081349). Perhaps he could comment on whether that characterization is accurate.
>>
>> On Fri, Mar 11, 2022 at 10:22 AM Billy Tetrud wrote:
>>
>>> Hi Gloria,
>>>
>>> > 1. Transaction relay rate limiting
>>>
>>> I have a similar concern as yours, that this could prevent higher fee-rate transactions from being broadcast.
>>>
>>> > 2. Staggered broadcast of replacement transactions: within some time interval, maybe accept multiple replacements for the same prevout, but only relay the original transaction.
>>>
>>> By this do you mean basically having a batching window where, on receiving a replacement transaction, a node will wait for a period of time, potentially receiving many replacements for the same transaction (or many separate conflicting transactions), and only broadcasting the "best" one(s) at the end of that time window?
>>>
>>> It's an interesting idea, but it would produce a problem. Every hop that a replacement transaction takes would be delayed by this staggered window. If the window were 3 minutes and transactions generally take 20 hops to get to the majority of miners, a "best-case average" delay might be 3.75 minutes (noting that among your 8 nodes, it's quite likely one of them would have a window ending much sooner than 3 minutes). Some (maybe 3% of) nodes would experience delays of more than 20 minutes. That kind of delay isn't great.
>>>
>>> However, it made me think of another idea: a transaction replacement broadcast cooldown. What if nodes kept track of the time they broadcasted the last replacement for a package and had a relay cooldown after the last replacement was broadcasted? A node receiving a replacement would relay the replacement immediately if the package it's replacing was broadcasted more than X seconds ago, and otherwise it would wait until the time when that package was broadcasted at least X seconds ago to broadcast it. Any replacements it receives during that waiting period would replace as normal, meaning the unrebroadcasted replacement would never be broadcasted, and only the highest-value replacement would be broadcasted at the end of the cooldown.
>>>
>>> This wouldn't prevent a higher-fee-rate transaction from being broadcasted (like rate limiting could), but would still be effective at limiting unnecessary data transmission.
>>> Another benefit is that in the non-adversarial case, replacement transactions wouldn't be subject to any delay at all (while in the staggered broadcast idea, most replacements would experience some delay). And in the adversarial case, where a malicious actor broadcasts a low-as-possible-value replacement just before yours, the worst-case delay is just whatever the cooldown period is. I would imagine that maybe 1 minute would be a reasonable worst-case delay. This would limit spam for a transaction that makes it into a block to ~10x (9 to 1). I don't see much of a downside to doing this beyond just the slight additional complexity of relay rules (and considering it could save substantial additional code complexity, even that is a benefit).
>>>
>>> All a node would need to do is keep a timestamp on each transaction it receives for when it was broadcasted and check it when a replacement comes in. If now - broadcastDate < cooldown, set a timer for cooldown - (now - broadcastDate) to broadcast it. If another replacement comes in, clear that timer and repeat using the original broadcast date (since the unbroadcast transaction doesn't have a broadcast date yet).
>>>
>>> I think it might also be useful to note that eliminating "extra data" caused by careless or malicious actors (spam or whatever you want to call it) should not be the goal. It is impossible to prevent all spam. What we should be aiming for is more specific: we should attempt to design a system where spam is manageable. E.g. if our goal is to ensure that a bitcoin node uses no more than 10% of the bandwidth of a "normal" user, and current non-spam traffic only requires 1% of a "normal" user's bandwidth, then the network can bear a 9 to 1 ratio of spam. When a node spins up, there is a lot more data to download and process. So we know that all full nodes can handle at least as much traffic as they handle during IBD. What's the difference between those amounts? I'm not sure, but I would guess that IBD is at least a couple times more demanding than a fully synced node. So I might suggest that as long as spam can be kept below a ratio of maybe 2 to 1, we should consider the design acceptable (and therefore more complexity unnecessary).
>>>
>>> The 1-minute broadcast cooldown I mentioned before wouldn't be quite sufficient to achieve that ratio, but a 3.33-minute cooldown would be. Whether this is "too much" is something that would have to be discussed; I suspect a worst-case adversarial 3.33-minute delay would not be "too much". Doing this could basically eliminate any risk of actual service denial via replacement transactions.
>>>
>>> However, I do think that these DOS concerns are quite overblown. I wrote up a comment on your rbf-improvements.md detailing my thought process on that. The summary is that as long as the fee-rate relay rule is maintained, any "spam" is actually paid for, either by the "first" transaction in the spam chain, or by the "spam" itself. Even without something like a minimum RBF relay delay limiting how much spam could be created, the economics of the fee-rate rule already sufficiently mitigate the issue of spam.
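>>>
>>> To make the cooldown bookkeeping described above concrete, here is a rough sketch in the same C++-style pseudocode used elsewhere in this thread (the names PackageId, PackageBeingReplaced, RelayNow and ScheduleRelayAt are made up purely for illustration, not an existing interface):
>>>
>>> // Time at which the last replacement for each package was relayed.
>>> std::map<PackageId, int64_t> last_relay_time;
>>>
>>> void OnReplacementAccepted(const CTransaction& replacement, int64_t now, int64_t cooldown) {
>>>     PackageId pkg = PackageBeingReplaced(replacement);
>>>     int64_t last = last_relay_time.count(pkg) ? last_relay_time[pkg] : 0;
>>>     if (now - last >= cooldown) {
>>>         RelayNow(replacement);           // outside the cooldown: relay immediately
>>>         last_relay_time[pkg] = now;
>>>     } else {
>>>         // Inside the cooldown: (re)schedule a single pending relay keyed on the original
>>>         // broadcast time. A later (better) replacement simply overwrites the pending one,
>>>         // so only the best replacement seen so far is relayed when the timer fires.
>>>         ScheduleRelayAt(pkg, last + cooldown, replacement);
>>>     }
>>> }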
>>> On Wed, Mar 9, 2022 at 9:37 AM Gloria Zhao via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>
>>>> Hi RBF friends,
>>>>
>>>> Posting a summary of RBF discussions at coredev (mostly on transaction relay rate-limiting), user-elected descendant limit as a short term solution to unblock package RBF, and mining score, all open for feedback:
>>>>
>>>> One big concept discussed was baking DoS protection into the p2p level rather than policy level. TLDR: The fees are not paid to the node operator, but to the miner. While we can use fees to reason about the cost of an attack, if we're ultimately interested in preventing resource exhaustion, maybe we want to "stop the bleeding" when it happens and bound the amount of resources used in general. There were two main ideas:
>>>>
>>>> 1. Transaction relay rate limiting (i.e. the one you proposed above or some variation) with a feerate-based priority queue
>>>> 2. Staggered broadcast of replacement transactions: within some time interval, maybe accept multiple replacements for the same prevout, but only relay the original transaction.
>>>>
>>>> Looking to solicit feedback on these ideas and the concept in general. Is it a good idea (separate from RBF) to add rate-limiting in transaction relay? And is it the right direction to think about RBF DoS protection this way?
>>>>
>>>> A lingering concern that I have about this idea is it would then be possible to impact the propagation of another person's transaction, i.e., an attacker can censor somebody's transaction from ever being announced by a node if they send enough transactions to fill up the rate limit. Obviously this would be expensive since they're spending a lot on fees, but I imagine it could be profitable in some situations to spend a few thousand dollars to prevent anyone from hearing about a transaction for a few hours. This might be a non-issue in practice if the rate limit is generous and traffic isn't horrendous, but is this a problem?
>>>>
>>>> And if we don't require an increase in (i.e. addition of "new") absolute fees, users are essentially allowed to "recycle" fees. In the scenario where we prioritize relay based on feerate, users could potentially be placed higher in the queue, ahead of other users' transactions, multiple times, without ever adding more fees to the transaction. Again, maybe this isn't a huge deal in practice if we set the parameters right, but it seems... not great, in principle.
>>>>
>>>> ---------
>>>>
>>>> It's probably also a good idea to point out that there's been some discussion happening on the gist containing my original post on this thread (https://gist.github.com/glozow/25d9662c52453bd08b4b4b1d3783b9ff).
>>>>
>>>> Suhas and Matt [proposed][0] adding a policy rule allowing users to specify descendant limits on their transactions. For example, some nth bit of nSequence with nVersion 3 means "this transaction won't have more than X vbytes of descendants" where X = max(1000, vsizeof(tx)) or something. It solves the pinning problem with package RBF where the attacker's package contains a very large and high-fee descendant.
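>>>>
>>>> As a very rough sketch (the helper and entry names here are placeholders rather than an actual implementation), the policy check might look something like this:
>>>>
>>>> // Hypothetical check when a new descendant of `parent` enters the mempool.
>>>> // Assumes some bit of nSequence on an nVersion=3 transaction signals the opt-in.
>>>> bool WithinElectedDescendantLimit(const CTxMemPoolEntry& parent, uint64_t new_descendant_vsize) {
>>>>     if (!SignalsDescendantLimit(parent.GetTx())) return true;                 // not opted in
>>>>     uint64_t limit = std::max<uint64_t>(1000, parent.GetTxSize());            // X = max(1000, vsizeof(tx))
>>>>     uint64_t current = parent.GetSizeWithDescendants() - parent.GetTxSize();  // existing descendants only
>>>>     return current + new_descendant_vsize <= limit;
>>>> }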
>>>> We could add this policy and deploy it with package RBF/package relay so that LN can use it by setting the user-elected descendant limit flag on commitment transactions. (Otherwise package RBF is blocked until we find a more comprehensive solution to the pinning attack.)
>>>>
>>>> It's simple to [implement][1] as a mempool policy, but it adds some complexity for wallets that use it, since it limits their use of UTXOs from transactions with this bit set.
>>>>
>>>> ---------
>>>>
>>>> Also, coming back to the idea of "we can't just use {individual, ancestor} feerate," I'm interested in soliciting feedback on adding a "mining score" calculator. I've implemented one [here][2] which takes the transaction in question, grabs all of the connected mempool transactions (including siblings, coparents, etc., as they wouldn't be in the ancestor nor descendant sets), and builds a "block template" using our current mining algorithm. The mining score of a transaction is the ancestor feerate at which it is included.
>>>>
>>>> This would be helpful for something like ancestor-aware funding and fee-bumping in the wallet: [3], [4]. I think if we did the rate-limited priority queue for transaction relay, we'd want to use something like this as the priority value. And for RBF, we probably want to require that a replacement have a higher mining score than the original transactions. This could be computationally expensive to do all the time; it could be good to cache it, but that could make mempool bookkeeping more complicated. Also, if we end up trying to switch to a candidate set-based algorithm for mining, we'd of course need a new calculator.
>>>>
>>>> [0]: https://gist.github.com/glozow/25d9662c52453bd08b4b4b1d3783b9ff?permalink_comment_id=4058140#gistcomment-4058140
>>>> [1]: https://github.com/glozow/bitcoin/tree/2022-02-user-desclimit
>>>> [2]: https://github.com/glozow/bitcoin/tree/2022-02-mining-score
>>>> [3]: https://github.com/bitcoin/bitcoin/issues/9645
>>>> [4]: https://github.com/bitcoin/bitcoin/issues/15553
>>>>
>>>> Best,
>>>> Gloria
>>>>
>>>> On Tue, Feb 8, 2022 at 4:58 AM Anthony Towns <aj@erisian.com.au> wrote:
>>>>
>>>>> On Mon, Feb 07, 2022 at 11:16:26AM +0000, Gloria Zhao wrote:
>>>>> > @aj:
>>>>> > > I wonder sometimes if it could be sufficient to just have a relay rate limit and prioritise by ancestor feerate though. Maybe something like:
>>>>> > > - instead of adding txs to each peer's setInventoryTxToSend immediately, set a mempool flag "relayed=false"
>>>>> > > - on a time delay, add the top N (by fee rate) "relayed=false" txs to each peer's setInventoryTxToSend and mark them as "relayed=true"; calculate how much kB those txs were, and do this again after SIZE/RATELIMIT seconds
>>>>> > > - don't include "relayed=false" txs when building blocks?
>>>>>
>>>>> The "?" was me not being sure that point is a good suggestion...
>>>>>
>>>>> Miners might reasonably decide to have no rate limit, and always relay, and never exclude txs -- but the question then becomes whether they hear about the tx at all, so rate limiting behaviour could still be a potential problem for whoever made the tx.
>>>>>
>>>>> > Wow cool!
>>>>> > I think outbound tx relay size-based rate-limiting and prioritizing tx relay by feerate are great ideas for preventing spammers from wasting bandwidth network-wide. I agree, this would slow the low feerate spam down, preventing a huge network-wide bandwidth spike. And it would allow high feerate transactions to propagate as they should, regardless of how busy traffic is. Combined with inbound tx request rate-limiting, might this be sufficient to prevent DoS regardless of the fee-based replacement policies?
>>>>>
>>>>> I think you only want to do outbound rate limits, ie, how often you send INV, GETDATA and TX messages? Once you receive any of those, I think you have to immediately process / ignore it, you can't really sensibly defer it (beyond the existing queues we have that just build up while we're busy processing other things first)?
>>>>>
>>>>> > One point that I'm not 100% clear on: is it ok to prioritize the transactions by ancestor feerate in this scheme? As I described in the original post, this can be quite different from the actual feerate we would consider a transaction in a block for. The transaction could have a high feerate sibling bumping its ancestor.
>>>>> > For example, A (1sat/vB) has 2 children: B (49sat/vB) and C (5sat/vB). If we just received C, it would be incorrect to give it a priority equal to its ancestor feerate (3sat/vB) because if we constructed a block template now, B would bump A, and C's new ancestor feerate is 5sat/vB.
>>>>> > Then, if we imagine that top N is >5sat/vB, we're not relaying C. If we also exclude C when building blocks, we're missing out on good fees.
>>>>>
>>>>> I think you're right that this would be ugly. It's something of a special case:
>>>>>
>>>>>  a) you really care about C getting into the next block; but
>>>>>  b) you're trusting B not being replaced by a higher fee tx that doesn't have A as a parent; and
>>>>>  c) there's a lot of txs bidding the floor of the next block up to a level in-between the ancestor fee rate of 3sat/vB and the tx fee rate of 5sat/vB
>>>>>
>>>>> Without (a), maybe you don't care about it getting to a miner quickly. If your trust in (b) was misplaced, then your tx's effective fee rate will drop and (because of (c)), you'll lose anyway. And if the spam ends up outside of (c)'s range, either the rate limiting won't take effect (spam's too cheap) and you'll be fine, or you'll miss out on the block anyway (spam's paying more than your tx rate) and you never had any hope of making it in.
>>>>>
>>>>> Note that we already rate limit via INVENTORY_BROADCAST_MAX / *_INVENTORY_BROADCAST_INTERVAL; which gets to something like 10,500 txs per 10 minutes for outbound connections. This would be a weight based rate limit instead-of/in-addition-to that, I guess.
>>>>>
>>>>> As far as a non-ugly approach goes, I think you'd have to be smarter about tracking the "effective fee rate" than the ancestor fee rate manages; maybe that's something that could fall out of Murch and Clara's candidate set blockbuilding ideas [0]?
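>>>>>
>>>>> (To put concrete numbers on your A/B/C example, assuming all three txs are 100 vB: A pays 100 sat and C pays 500 sat, so C's ancestor feerate is (100 + 500) / 200 = 3 sat/vB. Once B at 49 sat/vB has pulled A into the template, the only unmined ancestor package for C is C itself, at 500 / 100 = 5 sat/vB -- which is the "effective fee rate" a smarter tracker would need to report.)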
>>>>> Perhaps that same work would also make it possible to come up with a better answer to "do I care that this replacement would invalidate these descendants?"
>>>>>
>>>>> [0] https://github.com/Xekyo/blockbuilding
>>>>>
>>>>> > > - keep high-feerate evicted txs around for a while in case they get mined by someone else to improve compact block relay, a la the orphan pool?
>>>>> > Replaced transactions are already added to vExtraTxnForCompact :D
>>>>>
>>>>> I guess I was thinking that it's just a 100 tx LRU cache, which might not be good enough?
>>>>>
>>>>> Maybe it would be more on point to have a rate limit apply only to replacement transactions?
>>>>>
>>>>> > For wallets, AJ's "All you need is for there to be *a* path that follows the new relay rules and gets from your node/wallet to perhaps 10% of hashpower" makes sense to me (which would be the former).
>>>>>
>>>>> Perhaps a corollary of that is that it's *better* to have the mempool acceptance rule only consider economic incentives, and have the spam prevention only be about "shall I tell my peers about this?"
>>>>>
>>>>> If you don't have that split, then the anti-spam rules can prevent you from getting the tx in the mempool at all; whereas if you do have the split, then even if the bitcoind anti-spam rules are blocking you at every turn, you can still send your tx to miners by some other route, and then they can add it to their mempool directly without any hassle.
>>>>>
>>>>> Cheers,
>>>>> aj
>>>>>
>>>> _______________________________________________
>>>> bitcoin-dev mailing list
>>>> bitcoin-dev@lists.linuxfoundation.org
>>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev