From: Anthony Towns <aj@erisian.com.au>
To: Michael Folkson <michaelfolkson@protonmail.com>,
Bitcoin Protocol Discussion
<bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] bitcoin-inquisition: evaluating soft forks on signet
Date: Tue, 4 Oct 2022 08:54:04 +1000 [thread overview]
Message-ID: <YztoDOBOxJIZP14k@erisian.com.au> (raw)
In-Reply-To: <QCgAXgQnala8tj6931-FkV7y_E3_PccGjvWziGoY6QG7hF1RZfiM2Zh3luKTtnnpn_DgjjxzDlMESO1DO06zyr1Ykl_eUrQKLqe4g8wmRqQ=@protonmail.com>
On Sun, Oct 02, 2022 at 03:25:19PM +0000, Michael Folkson via bitcoin-dev wrote:
> I'm also perfectly happy with the status quo of the default signet
> having block signers and gatekeepers for soft forks activated on the
> default signet. I'm more concerned with those gatekeepers being under
> pressure to merge unfinished, buggy soft fork proposals on the default
> signet which need to be reversed or changed disrupting all default
> signet users.
First, I think it's far better for signet miners to be under that pressure
than either mainnet miners or maintainers/devs of bitcoin core. Or for
that matter, users of bitcoin who are just trying to use bitcoin and
not lose their money to bank confiscation or central bank hyperinflation.
That's where we stand today, whether you look solely at historical
precedent (cltv, csv and segwit were only testable on blockstream's
elements alpha prior to being merged into core, and since that also
included confidential assets it wasn't really a 1:1 test environment;
taproot wasn't really testable anywhere prior to being merged into core),
or you consider the focus of people actively trying to get forks deployed
currently (ctv has been pushing for a merge [0], and has considered
trying to get users and miners to adopt it [1]; likewise the great
consensus cleanup first proposed a PR for core [2] before posting a bip
draft [3], and progress stopped when the PR didn't move forwards; and
drivechains/bip300's current deployment approach is "do a uasf on
mainnet"); or see sentiment such as [4].
[0] https://www.erisian.com.au/bitcoin-core-dev/log-2022-01-13.html#l-490
    https://rubin.io/bitcoin/2021/12/24/advent-27/
[1] https://rubin.io/bitcoin/2022/04/17/next-steps-bip119/
[2] https://github.com/bitcoin/bitcoin/pull/15482
[3] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html
[4] https://twitter.com/CobraBitcoin/status/1570380739010793479
It's *great* that core maintainers, reviewers, devs, URSF advocates, etc
are able to resist pressure to merge bad things; what's not great is
that the time and attention of researchers, devs and businesses who are
trying to come up with good things for bitcoin ends up directed at
something that doesn't encourage useful forward progress.
But second, APO and CTV aren't being kept out of core because they're
"unfinished and buggy" per se (which isn't to say they aren't buggy or
shouldn't be kept out for that reason); at least in my view, they're
being kept out because of a combination of (a) it's not clear that
they're desirable to deploy on mainnet (whether at all, or in comparison
to other ways of obtaining similar functionality); and (b) prioritising
risk reduction on mainnet over improving the ability to test new ideas
outside of mainnet.
Bugs are much easier to deal with in comparison: you put a bunch of
testing/dev effort in to figure out what bugs there might be, then you
analyse them, then you fix them. If it were just a matter of finding
and fixing bugs, that's still hard, sure, but it's something we know
how to do.
It's the broader questions that are trickier: eg, do we want CTV first, or
CTV+APO at the same time, or just APO first? do we want some subtle tweaks
to CTV or APO rules to make them better? do we want OP_TXHASH or OP_TX
or some other variant instead? do we want to skip the intermediate steps
and go straight to simplicity/lisp? do we want to never have anything
that may risk covenant-like behaviour, ever? Without even an idea of how
to get answers to those questions, it's not clear that it even makes
sense to spend the time working on finding/fixing obscure implementation
bugs in the proposals.
(Ultimately, in my opinion, it's the same thing with drivechains and the
great consensus cleanup: are these ideas sensible to deploy on mainnet? If
the answer to that were a clear yes for either of them, then it would
make sense to work on merging them in core and activating on mainnet;
but at least to me, it's not clear whether the answer should be yes,
yes after some specific set of changes, or no. Nor is it clear what work
would help produce a clear answer.)
I think breaking the loop there is helpful: get these ideas out on
signet, where finding and fixing bugs does matter and is worth doing, but
where you *don't* have to deal with deep existential questions because
you're not messing with a multi-billion-dollar system and committing to
supporting the feature for the entire future of humanity.
Then, if there are alternative approaches that people think might be
better, get them out on signet too so that you can do apples-to-apples
comparisons: see how much code they are to actually implement, how
convenient they are to build on, whether there are any unexpected
differences between theory and practice, etc. Then you can build up real
answers to "is this a sensible thing to deploy on mainnet?"
For that, to get things onto signet you really only need to establish:
* it's interesting enough to be worth spending time on
* it's gone through a decent level of review and there are no known
bugs
* it doesn't conflict too heavily with the other interesting changes
we'd like to look at
and as a result you get to see the change in a production-like
environment, and can use that to help get better answers to the deeper,
harder questions.
There's definitely some threshold where a proposed soft fork would be
too much effort to add to inquisition -- perhaps that comes with adding
something like Simplicity ("227 changed files with 72,617 additions"
[5]), or perhaps it would be doing something like confidential assets
which is both intrusive and perhaps undesirable for mainnet deployment,
or perhaps it's just numbers: I had to adjust the APO patches to rebase
them on top of CTV; doing that N-1 times (in perhaps N**2 locations?) for
N soft forks will no doubt get tedious as N increases -- so maybe only
merging the "top 10" proposals in any six month period would make sense? I
don't really see the problem with crossing that bridge when we come to
it though.
[5] https://github.com/ElementsProject/elements/compare/simplicity
I guess I don't really mind if it's just me and Kalle deciding what the
"top 10" proposals are, or deciding at what point additional PRs get
too hard to merge. But in my ideal world, we'd have multiple devs and
researchers reviewing PRs in the inquisition repo, and as the ones doing
the work, it would make sense for them also to be the ones deciding what
projects are the most interesting and worth spending that effort on, and
thus which proposals are included and which ones aren't. At least that
way, wannabe gatekeepers have to contribute useful review effort.
> Right but disruption isn't boolean, it is a spectrum. It isn't
> disruption or zero disruption. The more soft fork proposals that are
> enabled on the default signet (and the more changes to those soft fork
> proposals pushed to the default signet) the higher the risk of a stalling
> blockchain
Like I said, I believe PR#7 makes that particular risk negligible (ie,
for people following signet with bitcoin core, the risk of a stalling
chain is no greater than it would be if all the signet miners were also
only running bitcoin core).
But you're right, it is a spectrum: eg, there's also the risk that
a bug in one soft fork interferes with testing another soft fork
(perhaps core nodes see signet continuing to add blocks, but inquisition
nodes do not, because the inquisition node's getblocktemplate resulted
in a block that core accepts but inquisition rejects). There are three
potential ways of mitigating that risk:
* finding bugs like that during review, before merging the code, let
alone running it
* quickly noticing such bugs, and reorging blocks that trigger them out
* using the -renounce feature of bitcoin-inquisition to temporarily
disable enforcing a buggy soft fork, until a fix can be merged and
deployed
But that risk only affects people following signet using an inquisition
node, and occurs whether or not it's a shared chain with bitcoin core
nodes. I'd hope that we can have good enough review that consensus bugs
are pretty rare in practice; but in the event that we do have them,
probably better that inquisition nodes do fail in obvious ways, so that
the bugs get noticed quickly and fixed.
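To make "fail in obvious ways" concrete: something along the lines of
the sketch below (placeholder rpc ports and credentials; both nodes
assumed to be following the same signet) is enough to notice a split
quickly -- just poll each node's getbestblockhash and complain when the
tips don't match.

    #!/usr/bin/env python3
    # Sketch only: watch for chain tip divergence between a stock core
    # node and an inquisition node following the same signet.
    # Ports/credentials below are placeholders, not real defaults.
    import base64, json, time, urllib.request

    NODES = {
        "core":        ("127.0.0.1", 38332, "user", "pass"),  # placeholder
        "inquisition": ("127.0.0.1", 38342, "user", "pass"),  # placeholder
    }

    def rpc(host, port, user, pw, method, params=None):
        body = json.dumps({"jsonrpc": "1.0", "id": "tipwatch",
                           "method": method, "params": params or []}).encode()
        auth = base64.b64encode(f"{user}:{pw}".encode()).decode()
        req = urllib.request.Request(
            f"http://{host}:{port}/", data=body,
            headers={"Content-Type": "application/json",
                     "Authorization": f"Basic {auth}"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["result"]

    while True:
        tips = {name: rpc(*cfg, "getbestblockhash")
                for name, cfg in NODES.items()}
        if len(set(tips.values())) > 1:
            print("tip divergence:", tips)
        time.sleep(60)

If that ever fires because of a buggy soft fork, that's exactly the
situation the -renounce option above is there for.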
> "The linux-next tree is the holding area for patches aimed at the next kernel merge window."
> I guess I'd also want expectations to be tempered a little for consensus changes on bitcoin-inquisition versus say this description of linux-next.
I think you're misinterpreting that description. "aimed at" doesn't
mean "will be accepted during", and more importantly, linux-next
is just an inspiration, not a template to follow literally. Anyway,
https://lwn.net/Articles/287155/ might be a better jumping off point if
you're interested in that rabbit hole.
> I'd like to avoid the "my soft fork proposal has been activated on
> the default signet so you should expect it to be activated on mainnet
> within x months or y years" type thing.
Like I said: this is a way to improve the "evaluation phase". Think of
it like the proposal being a kid sitting an exam: sitting the exam
doesn't mean they're going to get an A+, even though they already had to
do a lot of work just to sit the exam in the first place.
I think the ideal result from a soft fork proposal evaluation would be:
- this is the explicit proposal [bipN], here are the corresponding
changes to the code [PR#N]
- the performance impact on validators/miners of this change is [p]
so in the context of the applications mentioned above, that's [x.p,
y.p, z.p]. you can observe worst case performance under normal
conditions (where relay rules apply) by looking at signet blocks
[a, b, c]; worst case performance if a miner is attacking (using
non-standard transactions) may look like [d].
- people have come up with other alternative ideas [x, y]. this
proposal is superior to [x] because of [objective reason],
and superior to [y] because [when we tried it, y turned out to
be too annoying to implement/use].
- here are real, functioning examples of useful, new/improved
applications that you can build with this feature. if it were activated
on mainnet, they could be deployed on day 1, and see real use: [x,
y, z]
We've been pretty good at the first two already; it's the last two
that I think are holding back current proposals, and that this approach
would help improve. At least for me, an "A+" answer to all of the above
would cause
me to advocate for a proposal to be deployed on mainnet. My concept of
an A+ answer here is "this is such a good idea that it's now obvious to
essentially everyone, and there's no meaningful debate left to have".
A "B" answer, where, say, applications using the feature exist, but
don't seem very interesting or valuable is also possible; I'd think
that's a "needs improvement" result, where maybe you go back and try
to come up with a better proposal that enables more useful results,
rather than trying to get it deployed on mainnet. A "B" answer still
leaves open the question of "is there really a point? changes are risky,
and signet's not going to test every possible scenario..."
Having the outcome of an evaluation be an "F" for fail is also useful
-- maybe it turns out that, despite a bunch of people thinking CTV
or drivechains are cool, they do make it too easy to destroy
everything. In that case, having an objective demonstration of the
failure mode is a great outcome of an evaluation process: it allows us
to say "sorry, it's a waste of time working on this; you'll need to come
up with an entirely new approach that avoids this flaw" and have R&D
effort spent on useful things instead. Far better that than not giving
an answer and letting people assume "oh, we just need to hire someone
full time to advocate and shepherd the proposal" and spend more
R&D effort on a dead end.
(In the event that a proposed soft fork that gets added to inquisition
enables interesting/non-obvious miner-only attacks -- drivechains
maybe? -- I think I'd be open to the idea of manually mining some
non-standard signet blocks in order to crystallize what that sort of
attack might look like)
I'd say the "length of time" thing should look more like:
- here's our awesome idea, isn't it exciting?
- wow, people really are excited, let's implement it and deploy it
on signet!
- great, it's been on signet for a while: here's the applications
people have built using our idea: you should have a look!
- it seems like we've resolved all the issues, and people are pretty
excited about using the new apps with real money, let's deploy it
on mainnet
that is "it's been on signet a long time" is more about "here are the
apps that people have developed in that time" and "here's the adversarial
analysis people have done over that period to see if the idea is safe
or not". Whether something gets deployed on mainnet is more a question
of "are these apps actually valuable", "have the risks been thoroughly
explored and minimised", and "have alternatives been explored". If the
answer to some/all of those is still "no", then having had a long time
for that work to happen is probably more a negative than a positive...
Cheers,
aj