* [bitcoin-dev] On the regularity of soft forks
From: Michael Folkson @ 2021-10-11 12:24 UTC (permalink / raw)
To: bitcoin-dev
I was hoping to delay this post as long as possible because there are so many interesting and important things to discuss other than soft forks and consensus changes, things that seem to have taken a backseat this year due to Taproot activation. In addition, there seems to be a world of opportunity to leverage the upcoming Taproot soft fork, opportunity that risks getting drowned out by speculation on the next soft fork.
There is clearly nothing wrong with some individuals continuously working on, designing and refining new possible consensus changes, and whoever is interested is free to follow and participate in those discussions. This is Bitcoin; no one, let alone me, can decide what people should focus on. Indeed I intend to allocate a portion of my time to following and understanding the trade-offs of current and future soft fork proposals. However, in this post I will argue against frequent soft forks with a single or minimal set of features and instead argue for infrequent soft forks with batches of features.
I fully understand the desire and motivation to get consensus changes into Bitcoin as quickly as possible when certain use cases depend on them. However, the robustness, security and ability to resist harmful or suboptimal changes to the system is clearly the ultimate priority. The more frequently soft forks are attempted, the harder it is for the community to ensure harmful or suboptimal changes don’t creep into the consensus rules. I am well aware of how much community mindshare Taproot activation demanded this year. This is how it should be. The community should be informed and vigilant when the consensus rules are changed. Every full node will enforce these consensus rule changes, either immediately on activation or in future, and so it is in the interests of every full node operator that these changes have been subject to the highest levels of community review and rigorous testing. Attempting soft forks frequently either requires continuous community monitoring or an acceptance that an unneeded or harmful consensus change could easily creep into Bitcoin. Neither of these options seems acceptable to me. It is not reasonable to ask all the different segments of the community to dedicate full time resources to stay on top of proposed consensus changes. Hence treating a pull request to a Bitcoin implementation that requires a soft fork like any other pull request is shortsighted.
Merging soft fork code into a Bitcoin implementation
The code for a soft fork, let alone its activation parameters (discussed later), should not be merged into a Bitcoin implementation until the maintainers of that implementation are comfortable that the entirety of that soft fork has sufficient community consensus. This includes what many consider the reference implementation and dominant implementation on the network, Bitcoin Core. A soft fork pull request cannot and should not be treated like any other pull request, which can be merged with anything from 1 to 10 ACKs from long term or newer contributors. The act of merging a pull request that is part of a proposed soft fork is an acknowledgement by the maintainer(s) of that implementation that they consider the entirety of that proposed soft fork to have community consensus. That includes what is included in that soft fork as well as what isn’t. If there is a prevailing view that the current design could be improved, could feasibly be replaced by something superior in future or merely hasn’t been subject to sufficient community review, it should not be merged.
Of course there is no ultimate authority to enforce that this happens; Bitcoin is an entirely voluntary system. A contentious or disputed soft fork can be merged into a Bitcoin implementation at any time, but doing so opens the door to the schism, disruption and waste of developer hours that we saw in 2017. Personally I think we’ll see an attempt to activate a contentious soft fork at some point in the long term future (Murphy’s Law), but any attempt to do so should be strongly discouraged. Any individual(s) who attempt this should be made aware of the knock-on impacts and the potential short term damage they would be inflicting on the entire ecosystem. Longer term I have confidence in Bitcoin’s ability to survive whatever happens, but allocating significant community resources to resist an unnecessary contentious soft fork (or even regular contentious soft forks) is not an optimal use of those resources.
Soft fork activation
Miner signaling is a tool for signaling readiness. It is not voting for the soft fork or expressing support for the soft fork. There should not be any attempt to facilitate miner signaling until there is sufficient community consensus (the mining community is a subset of the community) on the soft fork. Merging activation parameters or encouraging miner signaling before it is clear there is community consensus on the entirety of the soft fork is putting the cart before the horse.
Taproot showed it was possible, through the sustained efforts of many individuals and many organizations, to achieve overwhelming community consensus for a soft fork. It is obviously impossible to get 100 percent consensus but Taproot appeared to get close to that. I did not identify any resistance to merging Taproot PRs or to the objective of getting Taproot activated, other than one long term contributor who effectively NACKed Taproot based on quantum resistance concerns.
Obtaining overwhelming community consensus on the activation method and activation parameters ended up being more challenging. Although I and a number of others participated in multiple open IRC meetings and spent months on IRC trying to find a way to get Taproot activated with at least rough consensus, a number of disagreements remain. I don’t think these are necessarily showstoppers for future soft forks and, assuming Taproot activates safely next month, they ended up not being showstoppers for Taproot. However, it is clear the bar that was reached regarding community consensus for the Taproot soft fork itself wasn’t met for the activation method and activation parameters. In a world where there isn’t overwhelming community consensus on the activation method, the activation parameters and what to do if the first activation attempt fails, we have to accept that soft fork activations carry downside risk on top of the already acknowledged risks of bugs, consensus divergences and botched implementations of soft fork features. To layer on top of that a level of uncertainty over whether there is community consensus for the actual soft fork seems unacceptable to me.
This is an important additional argument for infrequent soft forks with batches of features rather than frequent soft forks with a single feature. If there is a chain split risk every time you attempt a soft fork, you should not casually attempt soft forks frequently or with abandon. There has to be community consensus that the upsides of the soft fork are sufficient to take on these downside risks of disruption or, worse, chain splits. I was of the strong personal view that the upsides outweighed the downside risks for Taproot activation in 2021, but this is a judgment we as a community will have to make for each and every future proposed soft fork. It is easy to get excited about shiny new features. It is harder to ensure harmful or suboptimal changes don’t creep into the consensus rules, and harder yet to minimize the risk of chain splits if soft forks are attempted frequently.
--
Michael Folkson
Email: michaelfolkson at protonmail.com
Keybase: michaelfolkson
PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3
* Re: [bitcoin-dev] On the regularity of soft forks
From: Jeremy @ 2021-10-11 19:12 UTC (permalink / raw)
To: Michael Folkson, Bitcoin Protocol Discussion
> ... in this post I will argue against frequent soft forks with a single
> or minimal set of features and instead argue for infrequent soft forks
> with batches of features.
I think this type of development has been discussed in the past and has
been rejected.
From Matt Corallo's post:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017547.html
Matt: Follow the will of the community, irrespective of individuals or
unreasoned objection, but without ever overruling any reasonable
objection. Recent history also includes "objection" to soft forks in the
form of "this is bad because it doesn't fix a different problem I want
fixed ASAP". I don't think anyone would argue this qualifies as a
reasonable objection to a change, and we should be in a place, as a
community (never as developers or purely one group), to ignore such
objections and make forward progress in spite of them. We don't make
good engineering decisions by "bundling" unrelated features together to
enable political football and compromise.
AJ: - improvements: changes might not make everyone better off, but we
don't want changes to screw anyone over either -- pareto improvements
in economics, "first, do no harm", etc. (if we get this right, there's no
need to make compromises and bundle multiple flawed proposals so that
everyone's an equal mix of happy and miserable)
I think Matt and AJ's point of view, that bundling changes leads to the
inclusion of suboptimal features, is widely shared in the community.
This also has strong precedent in other important technical bodies, e.g.
RFC 7282, "On Consensus and Humming in the IETF"
(https://datatracker.ietf.org/doc/html/rfc7282):
Even worse is the "horse-trading" sort of compromise: "I object to
your proposal for such-and-so reasons. You object to my proposal for
this-and-that reason. Neither of us agree. If you stop objecting to
my proposal, I'll stop objecting to your proposal and we'll put them
both in." That again results in an "agreement" of sorts, but instead
of just one outstanding unaddressed issue, this sort of compromise
results in two, again ignoring them for the sake of expedience.
These sorts of "capitulation" or "horse-trading" compromises have no
place in consensus decision making. In each case, a chair who looks
for "agreement" might find it in these examples because it appears
that people have "agreed". But answering technical disagreements is
what is needed to achieve consensus, sometimes even when the people
who stated the disagreements no longer wish to discuss them.
If you would like to advocate that bitcoin development run counter to that,
you should provide a much stronger refutation of these engineering
norms.
* Re: [bitcoin-dev] On the regularity of soft forks
From: ZmnSCPxj @ 2021-10-11 19:53 UTC (permalink / raw)
To: Jeremy, Bitcoin Protocol Discussion
Good morning Jeremy,
> This also has strong precedent in other important technical bodies, e.g. from https://datatracker.ietf.org/doc/html/rfc7282 On Consensus and Humming in the IETF.
>
> Even worse is the "horse-trading" sort of compromise: "I object to
> your proposal for such-and-so reasons. You object to my proposal for
> this-and-that reason. Neither of us agree. If you stop objecting to
> my proposal, I'll stop objecting to your proposal and we'll put them
> both in." That again results in an "agreement" of sorts, but instead
> of just one outstanding unaddressed issue, this sort of compromise
> results in two, again ignoring them for the sake of expedience.
>
> These sorts of "capitulation" or "horse-trading" compromises have no
> place in consensus decision making. In each case, a chair who looks
> for "agreement" might find it in these examples because it appears
> that people have "agreed". But answering technical disagreements is
> what is needed to achieve consensus, sometimes even when the people
> who stated the disagreements no longer wish to discuss them.
>
> If you would like to advocate bitcoin development run counter to that, you should provide a much stronger refutation of these engineering norms.
The Internet has the maxim "be strict in what you provide, lenient in what you accept", which allows for slight incompatibilities between software to generally be papered over (xref the mountains of Javascript code that shim in various new ECMAScript features fairly reliably in a wide variety of browsers).
Bitcoin, as a consensus system, requires being paranoiacally strict on what transactions and blocks you accept.
Thus, the general engineering norm of separating concerns, of great application to "lenient in what you accept" systems, may not apply quite as well to "hell no I am not accepting that block" Bitcoin.
Bitcoin as well, as a resistance against state moneys, is inherently political, and it is possible that the only way out is through: we may need to resist this horse-trading by other means than separating concerns, including the political will to reject capitulation despite bundling.
Regards,
ZmnSCPxj
* Re: [bitcoin-dev] On the regularity of soft forks
From: Anthony Towns @ 2021-10-14 23:52 UTC (permalink / raw)
To: Jeremy, Bitcoin Protocol Discussion
On Mon, Oct 11, 2021 at 12:12:58PM -0700, Jeremy via bitcoin-dev wrote:
> > ... in this post I will argue against frequent soft forks with a single or
> minimal
> > set of features and instead argue for infrequent soft forks with batches
> > of features.
> I think this type of development has been discussed in the past and has been
> rejected.
> AJ: - improvements: changes might not make everyone better off, but we
> don't want changes to screw anyone over either -- pareto
> improvements in economics, "first, do no harm", etc. (if we get this
> right, there's no need to make compromises and bundle multiple
> flawed proposals so that everyone's an equal mix of happy and
> miserable)
I don't think your conclusion above matches my opinion, for what it's
worth.
If you've got two features, A and B, where the game theory is:
If A happens, I'm +100, You're -50
If B happens, I'm -50, You're +100
then even though A+B is +50, +50, I do think the answer should
generally be "think harder and come up with better proposals" rather than
"implement A+B as a bundle that makes us both +50".
_But_ if the two features are more like:
If C happens, I'm +100, You're +/- 0
If D happens, I'm +/- 0, You're +100
then I don't have a problem with bundling them together as a single
simultaneous activation of both C and D.
Also, you can have situations where things are better together,
that is:
If E happens, we're both at +100
If F happens, we're both at +50
If E+F both happen, we're both at +9000
In general, I think combining proposals when the combination is better
than the individual proposals were is obviously good; and combining
related proposals into a single activation can be good if it is easier
to think about the ideas as a set.
It's only when you'd be rejecting the proposal on its own merits that
I think combining it with others is a bad idea in principle.
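As a rough Python sketch of that rule of thumb (the names here are purely
illustrative, not from any real implementation): bundle only when every
component would be acceptable on its own, not merely because the summed
payoffs happen to come out positive.

    def acceptable_alone(payoffs):
        # A single feature is acceptable if it screws no one over.
        return all(p >= 0 for p in payoffs.values())

    def acceptable_bundle(features):
        # A bundle is acceptable only if each component passes on its own.
        return all(acceptable_alone(p) for p in features.values())

    # The A/B example above: +50/+50 in aggregate, but each half hurts someone.
    horse_trade = {"A": {"me": +100, "you": -50}, "B": {"me": -50, "you": +100}}
    # The C/D example: each half is at worst neutral for everyone.
    fine_bundle = {"C": {"me": +100, "you": 0}, "D": {"me": 0, "you": +100}}

    assert not acceptable_bundle(horse_trade)  # think harder, find better proposals
    assert acceptable_bundle(fine_bundle)      # fine to activate together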
For specific examples, we bundled schnorr, Taproot, MAST, OP_SUCCESSx
and CHECKSIGADD together because they do have synergies like that; we
didn't bundle ANYPREVOUT and graftroot despite the potential synergies
because those features needed substantially more study.
The nulldummy soft-fork (bip 147) was deployed concurrently with
the segwit soft-fork (bip 141, 143), but I don't think there was any
particular synergy or need for those things to be combined; it just
reduced the overhead of two sets of activation signalling to one.
Note that the implementation code for nulldummy had already been merged
and applied as relay policy well before activation parameters were
defined (May 2014 via PR#3843 vs Sep 2016 for PR#8636), let alone before
it became an active soft fork.
Cheers,
aj
* Re: [bitcoin-dev] On the regularity of soft forks
From: micaroni @ 2021-10-15 0:43 UTC (permalink / raw)
To: Anthony Towns, Bitcoin Protocol Discussion
Interesting discussion. Correct me if I'm wrong, but putting too many
features together in one shot can only make things harder to debug in
production if something very unexpected happens. It's a basic principle of
software engineering.
Change. Deploy. Nothing bad happened? Change it a little more. Deploy.
Or: Change, change, change. Deploy. Did something bad happen? What change
caused the problem?
* Re: [bitcoin-dev] On the regularity of soft forks
From: Michael Folkson @ 2021-10-16 11:02 UTC (permalink / raw)
To: micaroni, Bitcoin Protocol Discussion
> Interesting discussion. Correct me if I'm wrong, but putting too many features together in one shot can only make things harder to debug in production if something very unexpected happens. It's a basic principle of software engineering.
Soft fork features can (and should) obviously be tested thoroughly on testnet, signet, custom signets, sidechains etc on a standalone basis and on a bundled basis. But whether or not it is a basic principle of general software engineering kind of misses the point. Security critical software clearly isn't engineered in the same way as a new social media app. Bugs are easily reverted in a new social media app. A consensus change is extremely hard to revert: it probably requires a hard fork, a level of central coordination we generally attempt to avoid and a speed of deployment that we also attempt to avoid. On top of that we aren't just dealing with security critical software. One of the most important objectives is to keep all the nodes on the network in consensus. Introducing a consensus change before we are comfortable there is community consensus for it is a massive effective bug in itself. The network can split in multiple ways, e.g. part of the network disagrees on whether to activate the consensus change, part of the network disagrees on how to resist that consensus change, part of the network disagrees on how to activate that consensus change etc.
In addition, a social media app can experiment in production with whether Feature A works, whether Feature B works or whether Features A and B work best together. In Bitcoin, if we activate consensus Feature A, later decide we want consensus Feature B, but find out that by previously activating Feature A we can't have Feature B (it is now unsafe to activate it) or that its design now has to be suboptimal because we have to ensure it can safely work in the presence of Feature A, then we made a mistake by activating Feature A in the first place. Decentralized security critical consensus changes are an emerging field in their own right and really can't be treated like any other software project. This will become universally understood, I'm sure, over time.
--
Michael Folkson
Email: michaelfolkson at protonmail.com
Keybase: michaelfolkson
PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3
* Re: [bitcoin-dev] On the regularity of soft forks
From: Keagan McClelland @ 2021-12-31 3:10 UTC (permalink / raw)
To: Michael Folkson, Bitcoin Protocol Discussion
> But whether or not it is a basic principle of general software
engineering kind of misses the point. Security critical software clearly
isn't engineered in the same way as a new social media app. Bugs are easily
reverted in a new social media app. On top of that we aren't just dealing
with security critical software. One of the most important objectives is to
keep all the nodes on the network in consensus. Introducing a consensus
change before we are comfortable there is community consensus for it is a
massive effective bug in itself. The network can split in multiple ways
e.g. part of the network disagrees on whether to activate the consensus
change, part of the network disagrees on how to resist that consensus
change, part of the network disagrees on how to activate that consensus
change etc
> A consensus change is extremely hard to revert and probably requires a
hard fork, a level of central coordination we generally attempt to avoid
and a speed of deployment that we also attempt to avoid.
This seems to assert the idea that soft forks are all the same: they are
not. For instance, a soft fork lowering the block subsidy is completely
different from changing the semantics of an OP_NOP to have semantics that
may reject a subset of the witnesses that attest to a transaction's
permissibility. As a result, reversion means two entirely different things
in these contexts. While a strict reversion of both soft forks is by
definition a hard fork, the requirement of reversion as a result of
undesired behavior is not the same. In the case of opcodes, there is almost
never a requirement to revert: if you don't like the way the opcodes
behave, then you just don't use them. If you don't like the reduction of
the block subsidy, well, that's a much bigger problem.
I make this point to elucidate the idea that we cannot treat SoftForks™ as
a single monolithic idea. Perhaps we need to come up with better
terminology to be specific about what each fork actually is. The soft vs.
hard distinction is a critical one, but it is not enough, and it leaves us
treating noninvasive soft forks, such as OP_NOP tightenings, the same as
far more invasive ones. Better terminology has been
proposed before [1], and while I do not think the terms cited
are necessarily complete, they admit the low resolution of our current
terminology.
> Soft fork features can (and should) obviously be tested thoroughly on
testnet, signet, custom signets, sidechains etc on a standalone basis and a
bundled basis.
I vehemently disagree that any consensus changes should be bundled,
especially when it comes to activation parameters. When we start to bundle
things, we amplify the community resources needed to do review, not reduce
them. I suspect your opinion here is largely informed by your frustration
with the Taproot Activation procedure that you underwent earlier this year.
This is understandable. However, let me present the alternative case. If we
start to bundle features, the review of the features gets significantly
harder. As the Bitcoin project scales, the ability of any one developer to
understand the entire codebase declines. Bundling changes reduces the
number of people who are qualified to review a particular proposal, and
even worse, intimidates people who may be willing and able to review
logically distinct portions of the proposal, resulting in lower amounts of
review overall. This will likely have the opposite effect of what you seem
to desire. BIP8 and BIP9 give us the ability to have multiple independent
soft forks in flight at once. Choosing to bundle them instead makes little
sense when we do not have to. Bundling them will inevitably degenerate into
political horse trading and everyone will be worse off for it.
> part of the network disagrees on whether to activate the consensus
change, part of the network disagrees on how to resist that consensus
change, part of the network disagrees on how to activate that consensus
change etc
Disagreements, and by extension, forks are a part of Bitcoin. What is
important is that they are well defined and clean. This is the reason why
the mandatory signaling period exists in BIP8/9, so that clients that
intend to reject the soft fork change have a very easy means of doing so in
a clean break where consensus is clearly divergent. In accordance with
this, consensus changes should be sequenced so that people can decide which
sides of the forks they want to follow and so that the economic reality can
reorganize around that. If we choose to bundle them, you have one of two
outcomes: either consensus atomizes into a mist where people have different
ideas of which subsets of a soft fork bundle they want to adopt, or, what
likely comes after, a reconvergence on the old client with none of the
soft fork rules in place. This will lead to significantly more confusion as
well, given that with sufficient miner consensus some of the rules may stick
anyway even if the rest of the user base reconverges on the old client.
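As a very rough Python sketch of the clean-break mechanics referred to above
(a BIP8-style MUST_SIGNAL check; the bit position and names are illustrative,
not taken from any real client):

    TOP_BITS = 0x20000000   # BIP9-style version prefix (top bits 001)
    DEPLOYMENT_BIT = 2      # illustrative bit position for the deployment

    def signals(version: int, bit: int) -> bool:
        return (version >> 29) == 0b001 and (version >> bit) & 1 == 1

    def block_valid_in_must_signal_period(version: int) -> bool:
        # Enforcing clients reject non-signaling blocks here, so clients that
        # reject the soft fork diverge from them at a well-defined point.
        return signals(version, DEPLOYMENT_BIT)

    assert block_valid_in_must_signal_period(TOP_BITS | (1 << DEPLOYMENT_BIT))
    assert not block_valid_in_must_signal_period(TOP_BITS)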
It is quite likely less damaging to consensus to have frequent but strictly
sequenced soft forks so that if one of the new rules is contentious the
break can happen cleanly. That said, if Core or any other client wishes to
cut a release of the software with the parameters bundled into a single
release, that is a significantly more palatable state of affairs, as you
can still pipeline signaling and activation. However, the protocol itself
adopting a tendency to activate unrelated proposals in bundles is a recipe
for disaster.
Respectfully,
Keagan
[1] https://www.truthcoin.info/blog/protocol-upgrade-terminology
* Re: [bitcoin-dev] On the regularity of soft forks
From: Billy Tetrud @ 2022-01-19 2:26 UTC (permalink / raw)
To: Prayank; +Cc: Bitcoin Dev
> That day is nowhere near IMO and maybe we won't see it in my lifetime.
I think there is a reasonable argument to be made that maybe bitcoin needs
to move faster now than it should in the future, and that the cost of having
the community remain vigilant against harmful changes is worth the extra speed.
The question then becomes: does doing soft forks more often make things go
faster? It's not clear to me that the answer is yes.
> This is not possible in a decentralized network like Bitcoin and makes no
sense.
Why do you think that it's not possible? I completely disagree. The bitcoin
community has already come up with cultural norms like this, like the idea
of doing soft forks instead of hard forks wherever possible. It's impossible
to prevent others from doing otherwise, but it's completely possible and
desirable for the bitcoin community to adopt standards that we attempt to
adhere to.
> More changes bundled require more review and still more probability to
have bugs.
I already addressed this in my previous email. Why do you think there is
more to review in a soft fork with two bundled changes than in two separate
concurrent soft-fork activations using BIP8 or BIP9? Both require both
changes to be in the software and both require testing to ensure that the
changes interact appropriately. The difference is that in the second case,
you have to test all combinations of which order the proposals activate in.
And let's consider the easiest case: change A, then soft fork 1, then
change B, then soft fork 2. Change A needs to be tested all on its own, and
change B, when it comes along, then needs to be tested on code that
already has change A. If the changes are bundled, the same procedure needs
to happen. You just avoid having to do soft fork 1.
> BIP 8 with LOT=TRUE was a better activation mechanism
I completely disagree, but that's not relevant to this topic.
* Re: [bitcoin-dev] On the regularity of soft forks
From: Prayank @ 2022-01-18 17:22 UTC (permalink / raw)
To: Billy Tetrud; +Cc: Bitcoin Dev
> We should strive to one day get to a point where the bitcoin consensus isn't updating at all.
That day is nowhere near IMO and maybe we won't see it in my lifetime.
> Perhaps we should come to a consensus as a community on what the minimum time between soft forks should be, and just as importantly, what the minimum time should be between when a consensus-change implementation is finalized and when we decide community consensus has been achieved.
This is not possible in a decentralized network like Bitcoin and makes no sense. Soft forks can and should be done as and when required. This does not mean we do them often, but if a change makes sense, looks ready, has enough consensus, has been reviewed properly etc. then timing doesn't really matter in every case.
> Activating multiple consensus changes in a bundle is far safer than having multiple separate in-flight soft forks at once.
This is not true. More changes bundled require more review and bring a still higher probability of bugs. Security is always about keeping things simple.
> One solution is that we could be a lot more direct about how decisions are made. There's been a lot of rhetoric around UASF and how the economic majority is really who's running the show.
BIP 8 with LOT=TRUE was a better activation mechanism option for Taproot, but some influential developers wrote that it was misleading, unsafe etc. on social media. You can call me negative at this moment, however I have realized the truth is really sad and we can't blindly follow some people. There are a lot of people who will tell you bad things about UASF and how Speedy Trial is the best thing Bitcoin has ever experienced.
Michael Folkson also had some opinions on the activation mechanism IIRC.
--
Prayank
A3B1 E430 2298 178F
* Re: [bitcoin-dev] On the regularity of soft forks
From: Billy Tetrud @ 2022-01-18 16:00 UTC (permalink / raw)
To: vjudeu, Bitcoin Protocol Discussion
I agree with you Michael, there is a risk to soft forks and we shouldn't do
them too often. We should do them as infrequently as practical. We should
strive to one day get to a point where the bitcoin consensus isn't updating
at all.
Perhaps we should come to a consensus as a community on what
the minimum time between soft forks should be, and just as importantly,
what the minimum time should be between when a consensus-change implementation
is finalized and when we decide community consensus has been achieved.
How long do you think these minimums should be? I'm curious to know
everyone's answer to this. I would think of these the way I think about
how national law should be changed: slowly and
carefully. There should be sufficient time for everyone to have a chance in
their busy lives to take the time to look at it, if they care, so they can
raise objections. I think the minimum time between a soft fork
implementation finalization and determining consensus should be maybe a
year. And the minimum time between soft forks should probably be something
like 5 years. This would mean that people only have to worry about paying
attention to what might happen with bitcoin consensus once every 5 years,
and would get a year-long window to do it. And if there isn't sufficient
consensus, or people bring up objections at the last minute, that should be
acceptable and it should further delay the deployment.
I think a lot of folks on here are rightly concerned about compromise
bundles where multiple mediocre proposals are put together
to basically incentivize some people to accept something they don't want
in order to receive something they do want (eg what Jeremy quoted Matt
Corallo about). But I don't think that's what Michael was suggesting at
all. That kind of compromise happens in the *decision making process*. My
understanding of what Michael was saying is that releasing a soft fork
should *not* be within the decision making process at all. The decision
making process should have already happened.
If you have consensus changes A and B, Michael was saying that each
consensus change proposal should go through a community vetting process
that determines that there is widespread supermajority support for it
*before* it is even merged into the code (ie master, or some equivalent
this-will-be-deployed branch). It should have a final implementation that
has been tested at all levels *before* its merged to master. And only then
should it potentially be bundled. After all testing has already been done,
after sufficient consensus has already been determined.
@Keagan
> When we start to bundle things, we amplify the community resources needed
to do review, not reduce them.
I think my above 2 paragraphs address this. I agree we don't want to review
these proposals together, they should be reviewed separately. And I don't
think Michael was suggesting otherwise.
> the protocol itself adopting a tendency to activate unrelated proposals
in bundles is a recipe for disaster.
Activating multiple consensus changes in a bundle is far safer than having
multiple separate in-flight soft forks at once. With multiple in-flight
soft forks, you have many combinations of what might happen (and therefore
what needs to be tested beforehand). Just 3 in-flight soft forks means six
possible orders in which they might activate, and even more combinations once
you account for some of them failing to activate. All those combinations must
be exhaustively tested, as all consensus changes must be. This is far more
work, more complicated, and more error prone than bundling them together in
one soft fork.
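As a toy Python sketch of that combinatorial blow-up (orderings only; it
ignores the further cases where some deployments fail to activate):

    from itertools import permutations

    forks = ["A", "B", "C"]

    # Separate in-flight deployments: every order in which they might lock in
    # is a distinct path the network, and the test matrix, has to handle.
    print(len(list(permutations(forks))), "activation orders with separate deployments")  # 6

    # Bundled deployment: the rules activate together, so there is one path.
    print(1, "activation order with a bundled deployment")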
@Prayank
> However I am sure there are lot of people who still think miners vote
during signaling. ... I could not think of any solution to solve this
problem.
One solution is that we could be a lot more direct about how decisions are
made. There's been a lot of rhetoric around UASF and how the economic
majority is really who's running the show. If that's the case, why not make
that explicit? Why not actually ask users to sign a petition saying they
support a particular consensus change? This could be done with actual
signatures by keys connected to UTXOs so we can see the economic weight of
the petition. We would probably need to have a new address format to
prevent problems related to public key exposure (eg by having addresses
containing two public keys: `hash(hash(spendingkey)+hash(votingkey))` where
you can expose the voting key without exposing your spending key). Perhaps
this could be another tapleaf.
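As a minimal Python sketch of that commitment (hypothetical helper names,
random bytes standing in for real public keys; an actual design would need a
proper address or taproot encoding and real signatures):

    import hashlib, os

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    spending_key = os.urandom(33)   # stand-in for a real public key
    voting_key = os.urandom(33)

    # Commitment encoded in the address: hash(hash(spendingkey) + hash(votingkey))
    commitment = h(h(spending_key) + h(voting_key))

    # To sign a petition, reveal hash(spendingkey) plus the voting key itself;
    # anyone can recompute the commitment without learning the spending key.
    revealed_spend_hash, revealed_voting_key = h(spending_key), voting_key
    assert h(revealed_spend_hash + h(revealed_voting_key)) == commitment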
Doing this could make it very clear how much of the bitcoin world supports
a particular change without needing to put anything extra on chain. This
clarity would also help the actual miner activation of the software in
cases where miners might have incentives not to activate. If it were clear
that an overwhelming supermajority wants it activated, miners would be less
likely to play games that play off uncertainty. It would also dispel the
idea that miners or developers decide how bitcoin changes.
On Sat, Jan 1, 2022 at 10:00 AM vjudeu via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> > If you don't like the reduction of the block subsidy, well that's a much
> bigger problem.
>
> It is reversible, because you can also increase the block subsidy by using
> another kind of soft-fork. For example, you can create spendable outputs
> with zero satoshis. In this way, old nodes will accept that silently, but
> new nodes can check something more, because you can specify somewhere else,
> what is the "real" amount. Finally, if all nodes will upgrade, you will end
> up in a network, where all transactions spend zero satoshi inputs, create
> zero satoshi outputs and have zero fee. Old nodes would accept all of that,
> but new nodes would really see, what is going on, and they will check that
> all rules are met, and the new subsidy is for example increased x1000 (that
> could lead to the same situation as moving from satoshis to millisatoshis
> with some hard-fork, but doing that kind of change with a soft-fork is
> safer).
>
> On 2021-12-31 10:35:06 user Keagan McClelland via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> > But whether or not it is a basic principle of general software
> engineering kind of misses the point. Security critical software clearly
> isn't engineered in the same way as a new social media app. Bugs are easily
> reverted in a new social media app.On top of that we aren't just dealing
> with security critical software. One of the most important objectives is to
> keep all the nodes on the network in consensus. Introducing a consensus
> change before we are comfortable there is community consensus for it is a
> massive effective bug in itself. The network can split in multiple ways
> e.g. part of the network disagrees on whether to activate the consensus
> change, part of the network disagrees on how to resist that consensus
> change, part of the network disagrees on how to activate that consensus
> change etc
>
> > A consensus change is extremely hard to revert and probably requires a
> hard fork, a level of central coordination we generally attempt to avoid
> and a speed of deployment that we also attempt to avoid.
>
> This seems to assert the idea that soft forks are all the same: they are
> not. For instance a soft fork, lowering the block subsidy is completely
> different than changing the semantics of an OP_NOP to have semantics that
> may reject a subset of the witnesses that attest to the transactions
> permissibility. As a result, reversion means two entirely different things
> in these contexts. While a strict reversion of both soft forks is by
> definition a hard fork, the requirement of reversion as a result of
> undesired behavior is not the same. In the case of opcodes, there is almost
> never a requirement to revert it. If you don't like the way the opcodes
> behave, then you just don't use them. If you don't like the reduction of
> the block subsidy, well that's a much bigger problem.
>
> I make this point to elucidate the idea that we cannot treat SoftForks™ as
> a single monolithic idea. Perhaps we need to come up with better
> terminology to be specific about what each fork actually is. The soft vs.
> hard distinction is a critical one but it is not enough and treating soft
> forks that are noninvasive such as OP_NOP tightenings. This has been
> proposed before [1], and while I do not necessarily think the terms cited
> are necessarily complete, they admit the low resolution of our current
> terminology.
>
> > Soft fork features can (and should) obviously be tested thoroughly on
> testnet, signet, custom signets, sidechains etc on a standalone basis and a
> bundled basis.
>
> I vehemently disagree that any consensus changes should be bundled,
> especially when it comes to activation parameters. When we start to bundle
> things, we amplify the community resources needed to do review, not reduce
> them. I suspect your opinion here is largely informed by your frustration
> with the Taproot Activation procedure that you underwent earlier this year.
> This is understandable. However, let me present the alternative case. If we
> start to bundle features, the review of the features gets significantly
> harder. As the Bitcoin project scales, the ability of any one developer to
> understand the entire codebase declines. Bundling changes reduces the
> number of people who are qualified to review a particular proposal, and
> even worse, intimidates people who may be willing and able to review
> logically distinct portions of the proposal, resulting in lower amounts of
> review overall. This will likely have the opposite effect of what you seem
> to desire. BIP8 and BIP9 give us the ability to have multiple independent
> soft forks in flight at once. Choosing to bundle them instead makes little
> sense when we do not have to. Bundling them will inevitably degenerate into
> political horse trading and everyone will be worse off for it.
>
> > part of the network disagrees on whether to activate the consensus
> change, part of the network disagrees on how to resist that consensus
> change, part of the network disagrees on how to activate that consensus
> change etc
>
> Disagreements, and by extension, forks are a part of Bitcoin. What is
> important is that they are well defined and clean. This is the reason why
> the mandatory signaling period exists in BIP8/9, so that clients that
> intend to reject the soft fork change have a very easy means of doing so in
> a clean break where consensus is clearly divergent. In accordance with
> this, consensus changes should be sequenced so that people can decide which
> sides of the forks they want to follow and so that the economic reality can
> reorganize around that. If we choose to bundle them, you have one of two
> outcomes: either consensus atomizes into a mist where people have different
> ideas of which subsets of a soft fork bundle they want to adopt, or what
> likely comes after is a reconvergence on the old client with none of the
> soft fork rules in place. This will lead to significantly more confusion as
> well given that with sufficient miner consensus some of the rules may stick
> anyway even if the rest of the user base reconverges on the old client.
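To illustrate the earlier point about the mandatory signaling period giving a clean break, here is a schematic Python sketch (illustrative only; real clients check far more than a single boolean, and the "rejecting client" policy shown is just one way such a client could be built). During that period the two camps classify the very same blocks in opposite ways, so their chains must diverge at a well-defined height:

    # Schematic sketch: mandatory signaling forces a clean, well-defined split.
    def enforcing_client_accepts(block):
        # BIP8 (lockinontimeout=true) style: during the mandatory period,
        # blocks that fail to signal are invalid to enforcing nodes.
        return block["signals"]

    def rejecting_client_accepts(block):
        # A client built to reject the soft fork can do the mirror image and
        # refuse blocks that do signal, splitting away at the same height.
        return not block["signals"]

    signaling, non_signaling = {"signals": True}, {"signals": False}

    assert enforcing_client_accepts(signaling) and not rejecting_client_accepts(signaling)
    assert rejecting_client_accepts(non_signaling) and not enforcing_client_accepts(non_signaling)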
>
> It is quite likely less damaging to consensus to have frequent but
> strictly sequenced soft forks so that if one of the new rules is
> contentious the break can happen cleanly. That said, if Core or any other
> client wishes to cut a release of the software with the parameters bundled
> into a single release, that is a significantly more palatable state of
> affairs, as you can still pipeline signaling and activation. However, the
> protocol itself adopting a tendency to activate unrelated proposals in
> bundles is a recipe for disaster.
>
>
> Respectfully,
> Keagan
>
>
> [1] https://www.truthcoin.info/blog/protocol-upgrade-terminology
>
> On Sat, Oct 16, 2021 at 12:57 PM Michael Folkson via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> > Interesting discussion. Correct me if I'm wrong: but putting too many
>> > features together in one shot can only make things harder to debug in
>> > production if something very unexpected happens. It's a basic principle
>> > of software engineering.
>>
>> Soft fork features can (and should) obviously be tested thoroughly on
>> testnet, signet, custom signets, sidechains etc on a standalone basis and a
>> bundled basis. But whether or not it is a basic principle of general
>> software engineering kind of misses the point. Security critical software
>> clearly isn't engineered in the same way as a new social media app. Bugs
>> are easily reverted in a new social media app. A consensus change is
>> extremely hard to revert and probably requires a hard fork, a level of
>> central coordination we generally attempt to avoid and a speed of
>> deployment that we also attempt to avoid. On top of that we aren't just
>> dealing with security critical software. One of the most important
>> objectives is to keep all the nodes on the network in consensus.
>> Introducing a consensus change before we are comfortable there is community
>> consensus for it is effectively a massive bug in itself. The network can
>> split in multiple ways, e.g. part of the network disagrees on whether to
>> activate the consensus change, part of the network disagrees on how to
>> resist that consensus change, part of the network disagrees on how to
>> activate that consensus change, etc.
>>
>> In addition, a social media app can experiment in production with whether
>> Feature A works, whether Feature B works, or whether Features A and B work
>> best together. In Bitcoin, if we activate consensus Feature A, later decide
>> we want consensus Feature B, but find out that by previously activating
>> Feature A we can't have Feature B (it is now unsafe to activate it) or that
>> its design now has to be suboptimal because we have to ensure it can safely
>> work in the presence of Feature A, then we have made a mistake by activating
>> Feature A in the first place. Decentralized, security-critical consensus
>> changes are an emerging field in their own right and really can't be treated
>> like any other software project. This will become universally understood,
>> I'm sure, over time.
>>
>>
>>
>> --Michael Folkson
>> Email: michaelfolkson at protonmail.com
>> Keybase: michaelfolkson
>> PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3
>>
>>
>> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
>> On Friday, October 15th, 2021 at 1:43 AM, Felipe Micaroni Lalli via
>> bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>> Interesting discussion. Correct me if I'm wrong: but putting too many
>> features together in one shot can only make things harder to debug in
>> production if something very unexpected happens. It's a basic principle
>> of software engineering.
>>
>> Change. Deploy. Nothing bad happened? Change it a little more. Deploy again.
>> Or: Change, change, change. Deploy. Did something bad happen? What change
>> caused the problem?
>>
>> On Thu, Oct 14, 2021 at 8:53 PM Anthony Towns via bitcoin-dev
>> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> On Mon, Oct 11, 2021 at 12:12:58PM -0700, Jeremy via bitcoin-dev wrote:
>>> > > ... in this post I will argue against frequent soft forks with a single
>>> > > or minimal set of features and instead argue for infrequent soft forks
>>> > > with batches of features.
>>> > I think this type of development has been discussed in the past and has
>>> > been rejected.
>>>
>>> > AJ: - improvements: changes might not make everyone better off, but we
>>> > don't want changes to screw anyone over either -- pareto
>>> > improvements in economics, "first, do no harm", etc. (if we get this
>>> > right, there's no need to make compromises and bundle multiple
>>> > flawed proposals so that everyone's an equal mix of happy and
>>> > miserable)
>>>
>>> I don't think your conclusion above matches my opinion, for what it's
>>> worth.
>>>
>>> If you've got two features, A and B, where the game theory is:
>>>
>>> If A happens, I'm +100, You're -50
>>> If B happens, I'm -50, You're +100
>>>
>>> then even though A+B is +50, +50, I do think the answer should
>>> generally be "think harder and come up with better proposals" rather than
>>> "implement A+B as a bundle that makes us both +50".
>>>
>>> _But_ if the two features are more like:
>>>
>>> If C happens, I'm +100, You're +/- 0
>>> If D happens, I'm +/- 0, You're +100
>>>
>>> then I don't have a problem with bundling them together as a single
>>> simultaneous activation of both C and D.
>>>
>>> Also, you can have situations where things are better together,
>>> that is:
>>>
>>> If E happens, we're both at +100
>>> If F happens, we're both at +50
>>> If E+F both happen, we're both at +9000
>>>
>>> In general, I think combining proposals when the combination is better
>>> than the individual proposals were is obviously good; and combining
>>> related proposals into a single activation can be good if it is easier
>>> to think about the ideas as a set.
>>>
>>> It's only when you'd be rejecting the proposal on its own merits that
>>> I think combining it with others is a bad idea in principle.
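As a small worked example of that criterion (toy Python only; reducing each party's view of a proposal to a single payoff number, and reading "acceptable on its own merits" as "leaves nobody worse off", are simplifications of the argument above rather than anything formal):

    # Worked example: bundle only what would already be acceptable on its own.
    def acceptable_alone(payoffs):
        # A proposal stands on its own merits if no party is left worse off.
        return all(p >= 0 for p in payoffs)

    def bundle(*proposals):
        # Element-wise sum of payoffs if everything activates together.
        return [sum(col) for col in zip(*proposals)]

    A = [+100, -50]   # I'm +100, you're -50
    B = [-50, +100]
    C = [+100, 0]
    D = [0, +100]

    # A+B looks fine in aggregate (+50, +50), but only because two proposals
    # that would each be rejected alone are traded against each other.
    assert bundle(A, B) == [50, 50]
    assert not acceptable_alone(A) and not acceptable_alone(B)

    # C+D is an unobjectionable bundle: each part already stands on its own.
    assert bundle(C, D) == [100, 100]
    assert acceptable_alone(C) and acceptable_alone(D)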
>>>
>>> For specific examples, we bundled schnorr, Taproot, MAST, OP_SUCCESSx
>>> and CHECKSIGADD together because they do have synergies like that; we
>>> didn't bundle ANYPREVOUT and graftroot despite the potential synergies
>>> because those features needed substantially more study.
>>>
>>> The nulldummy soft-fork (bip 147) was deployed concurrently with
>>> the segwit soft-fork (bip 141, 143), but I don't think there was any
>>> particular synergy or need for those things to be combined, it just
>>> reduced the overhead of two sets of activation signalling to one.
>>>
>>> Note that the implementation code for nulldummy had already been merged
>>> and applied as relay policy well before activation parameters were
>>> defined (May 2014 via PR#3843 vs Sep 2016 for PR#8636), let alone becoming
>>> an active soft fork.
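That policy-before-consensus sequencing can be sketched as follows (illustrative Python only; the real checks live in Bitcoin Core's script interpreter and standardness code, and everything apart from BIP147's requirement that the extra element consumed by OP_CHECKMULTISIG be empty is simplified scaffolding):

    # Sketch: the same check first as relay policy, later as a consensus rule.
    def nulldummy_ok(dummy_element: bytes) -> bool:
        # BIP147: the extra stack element consumed by CHECKMULTISIG must be empty.
        return dummy_element == b""

    def accept_to_mempool(dummy_element, policy_enforces_nulldummy=True):
        # Policy: a node can refuse to relay non-conforming transactions
        # long before the rule is part of consensus.
        return not (policy_enforces_nulldummy and not nulldummy_ok(dummy_element))

    def connect_block(dummy_element, nulldummy_active):
        # Consensus: only after activation does a violation invalidate the block.
        return not (nulldummy_active and not nulldummy_ok(dummy_element))

    # Before activation: non-standard (not relayed) but still valid in a block.
    assert not accept_to_mempool(b"\x00")
    assert connect_block(b"\x00", nulldummy_active=False)

    # After activation: invalid at the consensus level too.
    assert not connect_block(b"\x00", nulldummy_active=True)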
>>>
>>> Cheers,
>>> aj
>>>
>>> _______________________________________________
>>> bitcoin-dev mailing list
>>> bitcoin-dev@lists.linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 32438 bytes --]
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [bitcoin-dev] On the regularity of soft forks
@ 2022-01-01 15:45 vjudeu
2022-01-18 16:00 ` Billy Tetrud
0 siblings, 1 reply; 13+ messages in thread
From: vjudeu @ 2022-01-01 15:45 UTC (permalink / raw)
To: Keagan McClelland, Michael Folkson, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 12917 bytes --]
> If you don't like the reduction of the block subsidy, well that's a much bigger problem.
It is reversible, because you can also increase the block subsidy by using another kind of soft fork. For example, you can create spendable outputs with zero satoshis. Old nodes will accept that silently, but new nodes can check something more, because the "real" amount can be specified somewhere else. Finally, if all nodes upgrade, you end up with a network where all transactions spend zero-satoshi inputs, create zero-satoshi outputs and have zero fee. Old nodes would accept all of that, but new nodes would see what is really going on: they would check that all the rules are met and that the new subsidy is, for example, increased x1000 (that could lead to the same situation as moving from satoshis to millisatoshis with a hard fork, but doing that kind of change with a soft fork is safer).
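As a toy illustration of that idea (Python pseudocode, not a concrete proposal; the commitment mechanism is abstracted to a simple lookup table, and the subsidy figures are only for the example): old nodes validate the visible zero-satoshi amounts and see nothing to object to, while upgraded nodes additionally check the committed "real" amounts against the larger subsidy.

    # Toy sketch of zero-satoshi outputs with "shadow" amounts checked by new nodes.
    OLD_SUBSIDY = 625_000_000          # sats, for example's sake
    NEW_SUBSIDY = OLD_SUBSIDY * 1000   # the "x1000" in the text

    def old_node_valid(tx):
        # Pre-fork rule on the amounts that are actually in the transaction.
        return sum(tx["visible_out"]) <= sum(tx["visible_in"]) + OLD_SUBSIDY

    def new_node_valid(tx, shadow_amounts):
        # Upgraded nodes additionally check the committed "real" amounts.
        if not old_node_valid(tx):
            return False
        real_in = sum(shadow_amounts[i] for i in tx["inputs"])
        real_out = sum(shadow_amounts[o] for o in tx["outputs"])
        return real_out <= real_in + NEW_SUBSIDY

    coinbase = {
        "inputs": [], "outputs": ["cb:0"],
        "visible_in": [], "visible_out": [0],   # old nodes see a 0-sat output
    }
    shadow = {"cb:0": NEW_SUBSIDY}              # what upgraded nodes account for

    assert old_node_valid(coinbase)             # accepted silently by old nodes
    assert new_node_valid(coinbase, shadow)     # and checked for real by new ones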
On 2021-12-31 10:35:06 user Keagan McClelland via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
> But whether or not it is a basic principle of general software engineering kind of misses the point. Security critical software clearly isn't engineered in the same way as a new social media app. Bugs are easily reverted in a new social media app. On top of that we aren't just dealing with security critical software. One of the most important objectives is to keep all the nodes on the network in consensus. Introducing a consensus change before we are comfortable there is community consensus for it is effectively a massive bug in itself. The network can split in multiple ways, e.g. part of the network disagrees on whether to activate the consensus change, part of the network disagrees on how to resist that consensus change, part of the network disagrees on how to activate that consensus change, etc.
> A consensus change is extremely hard to revert and probably requires a hard fork, a level of central coordination we generally attempt to avoid and a speed of deployment that we also attempt to avoid.
This seems to assert the idea that soft forks are all the same: they are not. For instance, a soft fork lowering the block subsidy is completely different from changing the semantics of an OP_NOP to have semantics that may reject a subset of the witnesses that attest to a transaction's permissibility. As a result, reversion means two entirely different things in these contexts. While a strict reversion of both soft forks is by definition a hard fork, the requirement of reversion as a result of undesired behavior is not the same. In the case of opcodes, there is almost never a requirement to revert: if you don't like the way the opcodes behave, then you just don't use them. If you don't like the reduction of the block subsidy, well, that's a much bigger problem.
I make this point to elucidate the idea that we cannot treat SoftForks™ as a single monolithic idea. Perhaps we need to come up with better terminology to be specific about what each fork actually is. The soft vs. hard distinction is a critical one, but it is not enough on its own to describe noninvasive soft forks such as OP_NOP tightenings. This has been proposed before [1], and while I do not think the terms cited are necessarily complete, they admit the low resolution of our current terminology.
> Soft fork features can (and should) obviously be tested thoroughly on testnet, signet, custom signets, sidechains etc on a standalone basis and a bundled basis.
I vehemently disagree that any consensus changes should be bundled, especially when it comes to activation parameters. When we start to bundle things, we amplify the community resources needed to do review, not reduce them. I suspect your opinion here is largely informed by your frustration with the Taproot Activation procedure that you underwent earlier this year. This is understandable. However, let me present the alternative case. If we start to bundle features, the review of the features gets significantly harder. As the Bitcoin project scales, the ability of any one developer to understand the entire codebase declines. Bundling changes reduces the number of people who are qualified to review a particular proposal, and even worse, intimidates people who may be willing and able to review logically distinct portions of the proposal, resulting in lower amounts of review overall. This will likely have the opposite effect of what you seem to desire. BIP8 and BIP9 give us the ability to have multiple independent soft forks in flight at once. Choosing to bundle them instead makes little sense when we do not have to. Bundling them will inevitably degenerate into political horse trading and everyone will be worse off for it.
> part of the network disagrees on whether to activate the consensus change, part of the network disagrees on how to resist that consensus change, part of the network disagrees on how to activate that consensus change etc
Disagreements, and by extension, forks are a part of Bitcoin. What is important is that they are well defined and clean. This is the reason why the mandatory signaling period exists in BIP8/9, so that clients that intend to reject the soft fork change have a very easy means of doing so in a clean break where consensus is clearly divergent. In accordance with this, consensus changes should be sequenced so that people can decide which sides of the forks they want to follow and so that the economic reality can reorganize around that. If we choose to bundle them, you have one of two outcomes: either consensus atomizes into a mist where people have different ideas of which subsets of a soft fork bundle they want to adopt, or what likely comes after is a reconvergence on the old client with none of the soft fork rules in place. This will lead to significantly more confusion as well given that with sufficient miner consensus some of the rules may stick anyway even if the rest of the user base reconverges on the old client.
It is quite likely less damaging to consensus to have frequent but strictly sequenced soft forks so that if one of the new rules is contentious the break can happen cleanly. That said, if Core or any other client wishes to cut a release of the software with the parameters bundled into a single release, that is a significantly more palatable state of affairs, as you can still pipeline signaling and activation. However, the protocol itself adopting a tendency to activate unrelated proposals in bundles is a recipe for disaster.
Respectfully,
Keagan
[1] https://www.truthcoin.info/blog/protocol-upgrade-terminology
[-- Attachment #2: Type: text/html, Size: 24809 bytes --]
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [bitcoin-dev] On the regularity of soft forks
2021-10-11 16:03 Prayank
@ 2021-10-12 19:04 ` Jorge Timón
0 siblings, 0 replies; 13+ messages in thread
From: Jorge Timón @ 2021-10-12 19:04 UTC (permalink / raw)
To: Prayank, Bitcoin Protocol Discussion; +Cc: Michael Folkson
On Tue, Oct 12, 2021 at 5:34 PM Prayank via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> Hi Michael,
>
> Agree with almost everything.
>
> > Miner signaling is a tool for signaling readiness. It is not voting for the soft fork or expressing support for the soft fork. There should not be any attempt to facilitate miner signaling until there is sufficient community consensus (the mining community is a subset of the community) on the soft fork.
>
> This is really important and often gets ignored. I wish there was a way to solve this problem in a way that it is not misinterpreted by users.
>
> During signalling for taproot, there were lots of users in different communities who believed miners were voting for taproot and that we need some percentage of miners to agree before making any changes in Bitcoin. It was not just non-technical users: a few mining pools, exchanges, etc. also considered miner signaling a voting process.
>
> Best I could do at that moment was share this link: https://bitcoin.stackexchange.com/questions/97043/is-there-an-active-list-of-bips-currently-open-for-voting/
>
> However, I am sure there are a lot of people who still think miners vote during signaling. Opinions of a few developers on MASF vs UASF also add more confusion to this. I could not think of any solution to this problem.
Yes, given most of the arguments made against activation at the end
of the period regardless of miner signaling, it seems, sadly, that it's
not just users but developers too. They seem to believe that miners must
choose for users with bip8(false) because (according to them) with
bip8(true) it is developers who decide for users, and they don't want
to decide for users: they want miners to decide for users.
They don't seem to believe users can actually choose for themselves, sadly.
In the next softfork, sadly, the same discussions will probably be
repeated, the same rational arguments will be ignored, and activation
will once again be done, in my opinion, the wrong way, and most users
(many more, as we grow in numbers) will remain confused in the same
way, confusing the newcomers they explain bitcoin to.
> --
> Prayank
>
> A3B1 E430 2298 178F
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [bitcoin-dev] On the regularity of soft forks
@ 2021-10-11 16:03 Prayank
2021-10-12 19:04 ` Jorge Timón
0 siblings, 1 reply; 13+ messages in thread
From: Prayank @ 2021-10-11 16:03 UTC (permalink / raw)
To: Michael Folkson; +Cc: Bitcoin Dev
[-- Attachment #1: Type: text/plain, Size: 1257 bytes --]
Hi Michael,
Agree with almost everything.
> Miner signaling is a tool for signaling readiness. It is not voting for the soft fork or expressing support for the soft fork. There should not be any attempt to facilitate miner signaling until there is sufficient community consensus (the mining community is a subset of the community) on the soft fork.
This is really important and often gets ignored. I wish there was a way to solve this problem in a way that it is not misinterpreted by users.
During signalling for taproot, there were lots of users in different communities who believed miners were voting for taproot and that we need some percentage of miners to agree before making any changes in Bitcoin. It was not just non-technical users: a few mining pools, exchanges, etc. also considered miner signaling a voting process.
Best I could do at that moment was share this link: https://bitcoin.stackexchange.com/questions/97043/is-there-an-active-list-of-bips-currently-open-for-voting/
However, I am sure there are a lot of people who still think miners vote during signaling. Opinions of a few developers on MASF vs UASF also add more confusion to this. I could not think of any solution to this problem.
--
Prayank
A3B1 E430 2298 178F
[-- Attachment #2: Type: text/html, Size: 1762 bytes --]
^ permalink raw reply [flat|nested] 13+ messages in thread
end of thread, other threads:[~2022-01-19 2:26 UTC | newest]
Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-10-11 12:24 [bitcoin-dev] On the regularity of soft forks Michael Folkson
2021-10-11 19:12 ` Jeremy
2021-10-11 19:53 ` ZmnSCPxj
2021-10-14 23:52 ` Anthony Towns
2021-10-15 0:43 ` micaroni
2021-10-16 11:02 ` Michael Folkson
2021-12-31 3:10 ` Keagan McClelland
2021-10-11 16:03 Prayank
2021-10-12 19:04 ` Jorge Timón
2022-01-01 15:45 vjudeu
2022-01-18 16:00 ` Billy Tetrud
2022-01-18 17:22 Prayank
2022-01-19 2:26 ` Billy Tetrud
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox