From: Jeremy
Date: Thu, 27 Jan 2022 17:35:11 -0800
To: Gloria Zhao, Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Improving RBF Policy

Gloria,

This is a brilliant post! Great job systematizing many of the issues. Quite a lot to chew on, and I hope other readers of this list digest the post fully.

Three things come to mind as partial responses.

Under:

> - **DoS Protection**: Limit two types of DoS attacks on the node's
>   mempool: (1) the number of times a transaction can be replaced and
>   (2) the volume of transactions that can be evicted during a
>   replacement.

I'd put it more simply: limit the amount of work that must be done to consider the replacement.

We don't particularly care about goal (1) or goal (2) in themselves; we care about how much it costs to do (1) or (2). There are scenarios where (1) or (2) might not be particularly high, but the total work still might be (I can give you some examples to consider if needed). There are also scenarios where (1) and (2) might be high, but the cost is low overall. Therefore it makes sense to be a little more general about what the anti-DoS goal is.

An issue I'd like to toss into the mix is that of iterative / additive batching. E.g., https://bitcoinops.org/en/cardcoins-rbf-batching/

This is where a business puts a txn in the mempool that pays to N users, and as they see additional requests for payouts they update it to N+1, N+2, ... N+M payouts. This iterative batching can be highly efficient because the number of transactions per business per 10 minutes is 1 (with a variable number of outputs).
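The "work, not counts" distinction can be made concrete with a toy cost model. Everything below is hypothetical (the names, the cost formula, and the budget are mine, not Bitcoin Core's): the point is only that a few conflicts with large descendant packages can cost more to process than many small, childless ones.

```python
# Hypothetical sketch: bound the *work* a replacement forces on the
# node, instead of counting replacements or evicted transactions.
# All names and constants here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class MempoolTx:
    vsize: int             # virtual size: rough proxy for validation cost
    num_descendants: int   # descendants that must be re-walked/evicted

def replacement_work(conflicts: list[MempoolTx]) -> int:
    # Toy cost model: each conflicting tx costs its own validation
    # plus a re-walk of its descendant package.
    return sum(tx.vsize * (1 + tx.num_descendants) for tx in conflicts)

MAX_REPLACEMENT_WORK = 400_000  # assumed budget, in "vsize-work" units

def may_consider_replacement(conflicts: list[MempoolTx]) -> bool:
    return replacement_work(conflicts) <= MAX_REPLACEMENT_WORK

# Many tiny, childless conflicts: high count (2), low work -> accepted.
many_cheap = [MempoolTx(vsize=100, num_descendants=0) for _ in range(100)]
# One conflict with a huge descendant tree: low count (1), high work.
one_expensive = [MempoolTx(vsize=50_000, num_descendants=20)]
```

Under this model `may_consider_replacement(many_cheap)` passes while `may_consider_replacement(one_expensive)` fails, even though the eviction *count* would suggest the opposite ordering.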
One issue with this approach today is that, because of the feerate rule, if you go from N to N+1 outputs you need to pay an extra 1 sat/byte over the whole txn. Applied M times, you have to increase fees quadratically for this approach. Therefore the less efficient long-chain-of-batches model ends up being 'rational' with respect to mempool policy and irrational with respect to "optimally packing blocks with transactions".

If the absolute fee rule is dropped but the feerate rule remains, one thing you might see is businesses doing iterative batches with N+2M outputs, whereby they drop 2 outputs for every input they add, allowing the iterative batch to always increase the feerate while possibly not triggering the quadratic fee issue, since the transaction gets smaller over time.

Another possible solution would be to allow relaying "txdiffs", which only require re-relay of signatures + new/modified outputs, and not the entire tx.

I think this iterative batching is pretty desirable to support, so I'd like to see an RBF model which doesn't make it "unfairly" expensive. (I'll spare everyone the details on how CTV batching also solves this, but feel free to ask elsewhere.)

A counterargument to additive batching is that if you instead do non-iterative batches every minute, and you have 100 txns that arrive uniformly, you'd end up with 10 batches of size 10 on average. The bulk of the benefit under this model is in the non-batched to batched transition; the iterative part only saves marginally on space/fees after that point.

A final point is that a verifiable delay function could be used over, e.g., each of the N COutPoints individually to rate-limit transaction replacement. The VDF period can be made shorter or eliminated depending on the feerate increase. E.g., always consider a much higher feerate txn whenever available; for things of equal feerate, only consider 1 per minute.
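The quadratic growth is easy to see numerically. A minimal sketch, with assumed sizes and rates (the constants below are illustrative, not actual policy values): each replacement must beat the old absolute fee by an incremental feerate times the *new* transaction's size, so after M output additions the total fee is the sum of M growing sizes — quadratic in M — and the effective feerate climbs far past what the batcher intended to pay.

```python
# Toy model of iteratively batched RBF under the fee rules described
# above. Constants are assumptions for illustration only.

BASE_VBYTES = 150   # assumed size of the tx minus payout outputs
OUT_VBYTES = 34     # assumed size of one additional payout output
INCREMENTAL = 1     # sat/vbyte bump required per replacement
TARGET_RATE = 2     # sat/vbyte the batcher actually wants to pay

def iterative_batch_fee(m: int) -> tuple[int, int]:
    """Return (total_fee, final_vsize) after m replacements,
    each adding one payout output to the batch."""
    size = BASE_VBYTES + OUT_VBYTES
    fee = TARGET_RATE * size
    for _ in range(m):
        size += OUT_VBYTES
        # Replacement must exceed the old absolute fee by at least
        # INCREMENTAL sat/vbyte over the whole new transaction,
        # and never pays less than the target feerate.
        fee = max(fee + INCREMENTAL * size, TARGET_RATE * size)
    return fee, size

for m in (10, 50, 100):
    fee, size = iterative_batch_fee(m)
    print(m, fee, round(fee / size, 2))  # effective feerate climbs with m
```

With these numbers the effective feerate roughly scales with M (so total fee scales with M²), which is the incentive pushing businesses toward the less block-efficient chain-of-batches model.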
A VDF is like proof-of-work that doesn't parallelize, in case you are unfamiliar: no matter how many computers you have, it would take about the same amount of time. (You could parallelize across the N outputs, of course, but you're still bound minimally to the time it takes to replace 1 output; doing all outputs individually is just the most flexible option.)

Cheers,

Jeremy