From: Mike Brooks
Date: Sun, 28 Jul 2019 20:07:01 -0700
To: ZmnSCPxj, bitcoin-dev
Subject: Re: [bitcoin-dev] PubRef - Script OP Code For Public Data References

ZmnSCPxj,

> Lightning uses a "short channel ID" which is basically an index of block number + index of transaction + index of output to refer to channels.

Oh wow, this is very similar to the PUBREF proposal. In fact the OP_PUBREF4 operation could be modified to take the tuple (block number, index of transaction, index of PUSHDATA) and it would be functionally equivalent. It looks like the construction of the short channel ID was chosen for the performance needed to resolve the lookup.

> The problem with transactions being pruned is that the data in them might now be used in a *future* `OP_PUBREF`.

I can see how pruning is needed for scalability, and pruning can be made compatible with a reference to a transaction. If a transaction is pruned, then the key material used in the pruned block's PUSHDATA operations is of no value.
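For reference, the short channel ID construction discussed above can be packed into a single 64-bit integer. The sketch below follows Lightning's BOLT 7 layout (3 bytes block height, 3 bytes transaction index, 2 bytes output index); the function names are illustrative, not from either proposal:

```python
# Sketch: pack/unpack a Lightning-style short channel ID.
# Layout per BOLT 7: 3 bytes block height | 3 bytes tx index | 2 bytes output index.

def encode_short_channel_id(block_height: int, tx_index: int, output_index: int) -> int:
    assert block_height < (1 << 24) and tx_index < (1 << 24) and output_index < (1 << 16)
    return (block_height << 40) | (tx_index << 16) | output_index

def decode_short_channel_id(scid: int) -> tuple:
    return (scid >> 40, (scid >> 16) & 0xFFFFFF, scid & 0xFFFF)

# A hypothetical OP_PUBREF reference could reuse the same shape, with the
# output index replaced by a PUSHDATA index.
```

The same fixed-width packing is presumably what makes the lookup cheap to resolve: the reference is a plain integer key rather than a 32-byte txid.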
A user of the network shouldn't need to make this kind of PUBREF, and if a user did want to bring a wallet back from the dead, then the utility of PUBREF wouldn't be available to them.

Best Regards,
Mike

On Sun, Jul 28, 2019 at 7:49 PM ZmnSCPxj wrote:

> Good morning Mike,
>
> > I think that this implication affects other applications built on the blockchain, not just the PubRef proposal:
>
> I believe not?
> Current applications use txids to refer to previous transactions, so even a short-ranged history rewrite will mostly not affect them --- they can just rebroadcast the transactions they are spending and get those reconfirmed again.
> There is admittedly a risk of double-spending, but each individual application can just spend deeply-confirmed transactions, and tune what it considers "deeply-confirmed" depending on how large the value being spent is.
> The point is that history rewrites are costly, but if the value being put in a `scriptPubKey` that uses `OP_PUBREF` is large enough, it may justify the cost of history rewrites --- but if the value is small, the individual application (which refers to transactions by their txid anyway) can generally assume miners will not bother to history-rewrite.
>
> Since `OP_PUBREF` would be a consensus rule, we need to select a "deeply-confirmed" point that is deep enough for *all* cases, unlike applications **on top of the blockchain** which can tune their rule of "deeply-confirmed" based on value.
> Thus my suggestion to use 100, which we consider "deep enough" to risk allowing miners to sell their coins.
>
> Lightning uses a "short channel ID" which is basically an index of block number + index of transaction + index of output to refer to channels.
> This is not a problem, however, even in case of short-ranged history rewrites.
> The short channel ID is only used for public routing.
> Between the channel counterparties, no security is based on the short channel ID being stable; it just loses you potential routing fees from the channel (and can be fixed by increasing your "deeply-confirmed" level enough before you announce the channel for public routing).
>
> > > There is a potential for a targeted attack where a large payout going to a `scriptPubKey` that uses `OP_PUBREF` on a recently-confirmed transaction finds that recently-confirmed transaction is replaced with one that pays to a different public key, via a history-rewrite attack.
> > > Such an attack is doable by miners, and if we consider that we accept 100 blocks for miner coinbase maturity as "acceptably low risk" against miner shenanigans, then we might consider that 100 blocks might be acceptable for this also.
> > > Whether 100 is too high or not largely depends on your risk appetite.
> >
> > I agree 100% this attack is unexpected and very interesting.
>
> It is precisely because of this possibility that we tend to avoid making SCRIPT validity dependent on anything that is not in the transaction.
> We would have to re-evaluate the SCRIPT every time there is a chain tip reorganization (increasing validation CPU load), unless we do something like "only allow `OP_PUBREF` to data that is more than 100 blocks confirmed".
>
> > However, I find the arbitrary '100' to be unsatisfying - I'll have to do some more digging. It would be interesting to trigger this on the testnet to see what happens. Do you know if anyone has pushed these limits? I am so taken by this attack I might attempt it.
> >
> > > Data derived from > 220Gb of perpetually-growing blockchain is hardly, to my mind, "only needs an array".
> >
> > There are other open source projects that have to deal with larger data sets and have accounted for the real-world limits on computability.
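The "more than 100 blocks confirmed" rule discussed above amounts to a simple depth test. A minimal sketch, with the constant and function names hypothetical:

```python
# Sketch of the hypothetical maturity rule for OP_PUBREF: a script may only
# reference PUSHDATA in a block buried more than PUBREF_MATURITY blocks deep.
PUBREF_MATURITY = 100  # mirrors Bitcoin's 100-block coinbase maturity

def pubref_reference_allowed(ref_block_height: int, chain_tip_height: int) -> bool:
    confirmations = chain_tip_height - ref_block_height + 1
    return confirmations > PUBREF_MATURITY
```

Gating references this way would mean a reorganization shallower than 100 blocks can never turn a valid script invalid, so ordinary chain-tip reorgs would not force script re-evaluation.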
> > Apache HTTPD's Bucket-Brigade comes to mind, which has been well tested and can account for limited RAM when accessing linear data structures. For a more general purpose utility, leveldb (BSD-licensed) provides random access to arbitrary data collections.
>
> Which is the point: we need to use something, the details need to be considered during implementation, implementation details may leak into the effective spec (e.g. DER-encoding), etc.
>
> > Pruning can also be a real asset for PubRef. If all transactions for a wallet have been pruned, then there is no need to index this PubRef - a validator can safely skip over it.
>
> What?
> The problem with transactions being pruned is that the data in them might now be used in a *future* `OP_PUBREF`.
>
> Further, pruned nodes are still full validators --- transactions may be pruned, but the pruned node will ***still*** validate any `OP_PUBREF` it uses, because it is still a full validator, it just does not archive old blocks in local storage.
>
> Regards,
> ZmnSCPxj
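To make the lookup and pruning discussion concrete, here is a toy in-memory stand-in for the kind of key-value index (leveldb or similar) a validator might keep. All names and the key layout are illustrative assumptions, not part of either proposal:

```python
# Toy index mapping (block_height, tx_index, pushdata_index) -> PUSHDATA bytes.
# A real node would use an on-disk store such as leveldb rather than a dict.
pubref_index = {}

def record_pushdata(block_height: int, tx_index: int, pushdata_index: int, data: bytes):
    pubref_index[(block_height, tx_index, pushdata_index)] = data

def resolve_pubref(block_height: int, tx_index: int, pushdata_index: int) -> bytes:
    # Even a pruned node must resolve this, which is why the index
    # (unlike the raw blocks it was built from) cannot simply be discarded.
    try:
        return pubref_index[(block_height, tx_index, pushdata_index)]
    except KeyError:
        raise ValueError("OP_PUBREF to unknown data; script invalid")
```

This illustrates ZmnSCPxj's point: whether or not old blocks are archived, the referenced PUSHDATA must remain resolvable, since a future `OP_PUBREF` may point at it.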
