public inbox for bitcoindev@googlegroups.com
From: Mike Brooks <m@ib.tc>
To: ZmnSCPxj <ZmnSCPxj@protonmail.com>,
	 bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] PubRef - Script OP Code For Public Data References
Date: Sun, 28 Jul 2019 20:07:01 -0700	[thread overview]
Message-ID: <CALFqKjT=6aYGqTkzqcf2c5dwApPDTTVGuyGt=Vxi6Ms43m-4nQ@mail.gmail.com> (raw)
In-Reply-To: <dNlv6xp0FTd29mczIiw3uOIoDYcTMV44Az0UZ53XQs_8PGzmnMIBhses0kU_SpOZGUcNzWRlerJEZm-p3YQ8USNIxPBouVZdWYCa7HcxDG0=@protonmail.com>


ZmnSCPxj,

> Lightning uses a "short channel ID" which is basically an index of block
> number + index of transaction + index of output to refer to channels.

Oh wow, this is very similar to the PUBREF proposal. In fact the OP_PUBREF4
operation could be modified to take the tuple: (block number, index of
transaction, index of PUSHDATA) and it would be functionally equivalent.
It looks like the construction of the short channel ID was chosen for the
performance needed to resolve the lookup.
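As an illustration, the short channel ID packs its tuple into a single 64-bit integer (3 bytes of block height, 3 bytes of transaction index, 2 bytes of output index, per BOLT 7); a PUBREF reference could be encoded the same way. A minimal sketch:

```python
# Sketch: pack (block height, tx index, output index) into 64 bits,
# as Lightning's short channel ID does.  A PUBREF tuple could reuse
# this layout, with the output index replaced by a PUSHDATA index.

def encode_short_id(block_height: int, tx_index: int, out_index: int) -> int:
    assert 0 <= block_height < 1 << 24   # 3 bytes
    assert 0 <= tx_index < 1 << 24       # 3 bytes
    assert 0 <= out_index < 1 << 16      # 2 bytes
    return (block_height << 40) | (tx_index << 16) | out_index

def decode_short_id(short_id: int) -> tuple:
    return (short_id >> 40, (short_id >> 16) & 0xFFFFFF, short_id & 0xFFFF)

sid = encode_short_id(601000, 1100, 1)
assert decode_short_id(sid) == (601000, 1100, 1)
```

The fixed-width layout is what makes the lookup cheap: the reference is a plain integer key rather than a 32-byte txid.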

> The problem with transactions being pruned is that the data in them
> might now be used in a *future* `OP_PUBREF`.

I can see how pruning is needed for scalability, and pruning can be made
compatible with a reference to a transaction. If a transaction is pruned,
then the key material used in that pruned block's PUSHDATA operations is of
no value. A user of the network shouldn't need to make this kind of
PUBREF, and if a user did want to bring a wallet back from the dead, then
the utility of PUBREF wouldn't be available to them.
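To make that concrete, a validator could simply refuse to resolve a reference into pruned history. This is only a sketch; `pruned_heights` and `lookup_pushdata` are hypothetical stand-ins for a node's prune index and PUSHDATA index, not anything in Bitcoin Core:

```python
# Sketch of the skip rule described above.  A reference into a pruned
# block resolves to None (skip/reject); anything else is looked up in
# a hypothetical PUSHDATA index.

def resolve_pubref(block_height, tx_index, push_index,
                   pruned_heights, lookup_pushdata):
    """Return the referenced PUSHDATA bytes, or None if pruned."""
    if block_height in pruned_heights:
        return None  # key material in pruned blocks is of no value
    return lookup_pushdata(block_height, tx_index, push_index)
```

Whether "skip" should mean script failure or simply an unsatisfiable reference is a policy question this sketch deliberately leaves open.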

Best Regards,
Mike


On Sun, Jul 28, 2019 at 7:49 PM ZmnSCPxj <ZmnSCPxj@protonmail.com> wrote:

> Good morning Mike,
>
> >  I think that this implication affects other applications built on the
> blockchain, not just the PubRef proposal:
> >
>
> I believe not?
> Current applications use txids to refer to previous transactions, so even
> a short-ranged history rewrite will mostly not affect them --- they can
> just rebroadcast the transactions they are spending and get those
> reconfirmed again.
> There is admittedly a risk of double-spending, but each individual
> application can just spend deeply-confirmed transactions, and tune what it
> considers "deeply-confirmed" depending on how large the value being spent
> is.
> The point is that history rewrites are costly, but if the value being put
> in a `scriptPubKey` that uses `OP_PUBREF` is large enough, it may justify
> the cost of history rewrites --- but if the value is small, the individual
> application (which refers to transactions by their txid anyway) can
> generally assume miners will not bother to history-rewrite.
>
> Since `OP_PUBREF` would be a consensus rule, we need to select a
> "deeply-confirmed" point that is deep enough for *all* cases, unlike
> applications **on top of the blockchain** which can tune their rule of
> "deeply-confirmed" based on value.
> Thus my suggestion to use 100, which we consider "deep enough" to risk
> allowing miners to sell their coins.
>
> Lightning uses a "short channel ID" which is basically an index of block
> number + index of transaction + index of output to refer to channels.
> This is not a problem, however, even in case of short-ranged history
> rewrites.
> The short channel ID is only used for public routing.
> Between the channel counterparties, no security is based on short channel
> ID being stable; it just loses you potential routing fees from the channel
> (and can be fixed by increasing your "deeply-confirmed" enough level before
> you announce the channel for public routing).
>
> >  > There is a potential for a targeted attack where a large payout going
> to a `scriptPubKey` that uses `OP_PUBREF` on a recently-confirmed
> transaction finds that recently-confirmed transaction is replaced with one
> that pays to a different public key, via a history-rewrite attack.
> >  > Such an attack is doable by miners, and if we consider that we accept
> 100 blocks for miner coinbase maturity as "acceptably low risk" against
> miner shenanigans, then we might consider that 100 blocks might be
> acceptable for this also.
> >  > Whether 100 is too high or not largely depends on your risk appetite.
> >
> > I agree 100% this attack is unexpected and very interesting.
>
> It is precisely because of this possibility that we tend to avoid making
> SCRIPT validity dependent on anything that is not in the transaction.
> We would have to re-evaluate the SCRIPT every time there is a chain tip
> reorganization (increasing validation CPU load), unless we do something
> like "only allow `OP_PUBREF` to data that is more than 100 blocks
> confirmed".
>
> >  However, I find the arbitrary '100' to be unsatisfying - I'll have to
> do some more digging. It would be interesting to trigger this on the
> testnet to see what happens.  Do you know if anyone has pushed these
> limits?  I am so taken by this attack I might attempt it.
> >
> >  > Data derived from > 220Gb of perpetually-growing blockchain is
> hardly, to my mind, "only needs an array".
> >
> There are other open-source projects that have to deal with larger data
> sets and have accounted for real-world limits on computation. Apache
> HTTPD's bucket brigades come to mind, which have been well tested and can
> account for limited RAM when accessing linear data structures. For a more
> general-purpose utility, leveldb (BSD-licensed) provides random access to
> arbitrary data collections.
>
> Which is the point: we need to use something, the details need to be
> considered during implementation, implementation details may leak in the
> effective spec (e.g. DER-encoding), etc.
>
> >  Pruning can also be a real asset for PubRef.  If all transactions for a
> wallet have been pruned, then there is no need to index this PubRef - a
> validator can safely skip over it.
>
> What?
> The problem with transactions being pruned is that the data in them might
> now be used in a *future* `OP_PUBREF`.
>
> Further, pruned nodes are still full validators --- transactions may be
> pruned, but the pruned node will ***still*** validate any `OP_PUBREF` it
> uses, because it is still a full validator; it just does not archive old
> blocks in local storage.
>
> Regards,
> ZmnSCPxj
>
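The 100-block maturity rule discussed in the quoted message can be sketched in a few lines; the constant name is hypothetical, chosen to mirror coinbase maturity:

```python
# Sketch: only allow OP_PUBREF to reference data buried more than
# PUBREF_MATURITY blocks deep, mirroring the 100-block coinbase
# maturity rule, so a chain-tip reorganization cannot invalidate
# an already-accepted reference.
PUBREF_MATURITY = 100

def pubref_allowed(ref_height: int, tip_height: int) -> bool:
    confirmations = tip_height - ref_height + 1
    return confirmations > PUBREF_MATURITY
```

Because only references deeper than the maturity window are ever valid, scripts never need re-evaluation on a short-ranged reorg, which is the validation-CPU concern raised above.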



Thread overview: 13+ messages
2019-07-19  6:05 [bitcoin-dev] PubRef - Script OP Code For Public Data References Mike Brooks
2019-07-19 15:17 ` William Casarin
2019-07-19 19:17   ` Pieter Wuille
2019-07-19 17:45 ` Yuval Kogman
     [not found]   ` <CALFqKjTHT7XaREK7ewwKKBjrZcty7ueNBMtSLEW7B-o9uwXgmw@mail.gmail.com>
2019-07-19 22:48     ` Yuval Kogman
2019-07-24 19:49       ` Mike Brooks
2019-07-19 18:07 ` ZmnSCPxj
2019-07-27 20:03   ` Mike Brooks
2019-07-29  1:46     ` ZmnSCPxj
2019-07-29  2:19       ` Mike Brooks
2019-07-29  2:49         ` ZmnSCPxj
2019-07-29  3:07           ` Mike Brooks [this message]
2019-07-29  3:39             ` ZmnSCPxj
