I agree with you, Michael: there are risks to soft forks and we shouldn't do them too often. We should do them as infrequently as practical. We should strive to one day get to a point where bitcoin's consensus rules aren't changing at all.
Perhaps we should come to a consensus as a community on what the minimum time between soft forks should be, and, just as importantly, on the minimum time between when a consensus-change implementation is finalized and when we decide community consensus has been achieved.
How long do you think these minimums should be? I'm curious to know everyone's answer to this. I would think about these the way I think about changes to national law: they should be made slowly and carefully. There should be sufficient time for everyone who cares to find room in their busy lives to look at a proposal and raise objections. I think the minimum time between finalizing a soft fork implementation and determining consensus should be maybe a year, and the minimum time between soft forks should probably be something like 5 years. That would mean people only have to worry about paying attention to what might happen with bitcoin consensus once every 5 years, and they would get a year-long window to do it. And if there isn't sufficient consensus, or people bring up objections at the last minute, that should be acceptable and it should further delay the deployment.
I think a lot of folks on here are rightly concerned about compromise bundles, where multiple mediocre proposals are put together to incentivize some people to accept something they don't want in order to receive something they do want (e.g. what Jeremy quoted Matt Corallo about). But I don't think that's what Michael was suggesting at all. That kind of compromise happens within the decision making process. My understanding of what Michael was saying is that releasing a soft fork should *not* be part of the decision making process at all: the decision making process should have already happened.
If you have consensus changes A and B, Michael was saying that each consensus change proposal should go through a community vetting process that determines there is widespread supermajority support for it *before* it is even merged into the code (i.e. master, or some equivalent this-will-be-deployed branch). It should have a final implementation that has been tested at all levels *before* it's merged to master. Only then should it potentially be bundled, after all testing has already been done and after sufficient consensus has already been determined.
@Keagan
> When we start to bundle things, we amplify the community resources needed to do review, not reduce them.
I think my above 2 paragraphs address this. I agree we don't want to review these proposals together, they should be reviewed separately. And I don't think Michael was suggesting otherwise.
> the protocol itself adopting a tendency to activate unrelated proposals in bundles is a recipe for disaster.
Activating multiple consensus changes in a bundle is far safer than having multiple separate in-flight soft forks at once. With multiple in-flight soft forks, you have many combinations of what might happen (and therefore what needs to be tested beforehand). Just 3 in-flight soft forks means 8 possible combinations of which forks end up activating, and even more possible activation orderings. All those combinations must be exhaustively tested, as all consensus changes must be. This is far more work, more complicated, and more error prone than bundling them together in one soft fork.
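To make the counting argument concrete, here is a toy Python enumeration. The fork names are placeholders and this is only an illustration of the combinatorics, not anything specific to how soft forks actually deploy:

```python
from itertools import combinations, permutations

# Toy illustration of the counting argument above; fork names are placeholders.
forks = ["A", "B", "C"]

# Which subset of the in-flight forks ends up activating: 2^3 = 8 possibilities.
subsets = [s for r in range(len(forks) + 1) for s in combinations(forks, r)]
print(len(subsets))  # 8

# If the order in which they activate also matters, the state space is larger
# still: every ordering of every subset.
ordered_outcomes = [p for s in subsets for p in permutations(s)]
print(len(ordered_outcomes))  # 16
```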
@Prayank
> However I am sure there are lot of people who still think miners vote during signaling. ... I could not think of any solution to solve this problem.
One solution is that we could be a lot more direct about how decisions are made. There's been a lot of rhetoric around UASF and how the economic majority is really who's running the show. If that's the case, why not make it explicit? Why not actually ask users to sign a petition saying they support a particular consensus change? This could be done with actual signatures by keys connected to UTXOs, so we can see the economic weight of the petition. We would probably need a new address format to prevent problems related to public key exposure (e.g. by having addresses commit to two public keys, `hash(hash(spendingkey)+hash(votingkey))`, so you can expose the voting key without exposing your spending key). Perhaps this could be another tapleaf.
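A rough sketch of how that two-key commitment could work, assuming SHA256 as the hash and placeholder key encodings (this is not a concrete spec, just an illustration of why revealing the voting key wouldn't expose the spending key):

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in for whatever hash function the address format would actually use.
    return hashlib.sha256(data).digest()

# Placeholder keys purely for illustration.
spending_pubkey = bytes.fromhex("02" + "11" * 32)
voting_pubkey = bytes.fromhex("03" + "22" * 32)

# The address commits to hash(hash(spendingkey) + hash(votingkey)), as described above.
address_commitment = h(h(spending_pubkey) + h(voting_pubkey))

# To sign a petition, the user reveals the voting key and hash(spendingkey).
# Anyone can check the pair against the address without learning the spending key.
revealed_spending_hash = h(spending_pubkey)
assert h(revealed_spending_hash + h(voting_pubkey)) == address_commitment
```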
Doing this could make it very clear how much of the bitcoin world supports a particular change without needing to put anything extra on chain. This clarity would also help the actual miner activation of the software in cases where miners might have incentives not to activate. If it were clear that an overwhelming supermajority wants it activated, miners would be less likely to play games that exploit uncertainty. It would also dispel the idea that miners or developers decide how bitcoin changes.
> If you don't like the reduction of the block subsidy, well that's a much bigger problem.
It is reversible, because you can also increase the block subsidy with another kind of soft fork. For example, you can create spendable outputs with zero satoshis. Old nodes will silently accept that, but new nodes can check something more, because the "real" amount can be specified somewhere else. Eventually, if all nodes upgrade, you end up with a network where all transactions spend zero-satoshi inputs, create zero-satoshi outputs, and have zero fee as far as old nodes can tell. Old nodes would accept all of that, but new nodes would see what is really going on and check that all the new rules are met, and that the new subsidy is, for example, increased 1000x (that could lead to the same situation as moving from satoshis to millisatoshis with a hard fork, but doing that kind of change with a soft fork is safer).
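Purely as an illustration of the accept-more vs. accept-less relationship being described (where and how the "real" amount would be committed is deliberately left unspecified, as it is above), a hypothetical sketch:

```python
# Hypothetical sketch only: old nodes see zero-satoshi outputs and accept them,
# new nodes additionally check a "real amount" committed somewhere else (where
# and how it is committed is left unspecified here, as in the post above).

def old_node_accepts(output_value_sats: int) -> bool:
    # Old rules: any non-negative output value is acceptable, including zero.
    return output_value_sats >= 0

def new_node_accepts(output_value_sats: int,
                     committed_real_amount: int,
                     max_allowed: int) -> bool:
    # New, stricter rules: the visible value must be zero and the committed
    # "real" amount must respect the new limits. Everything new nodes accept,
    # old nodes also accept, which is what makes this a soft fork.
    return output_value_sats == 0 and 0 <= committed_real_amount <= max_allowed
```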
> A consensus change is extremely hard to revert and probably requires a hard fork, a level of central coordination we generally attempt to avoid and a speed of deployment that we also attempt to avoid.
This seems to assert the idea that soft forks are all the same: they are not. For instance, a soft fork lowering the block subsidy is completely different from changing the semantics of an OP_NOP to have semantics that may reject a subset of the witnesses that attest to a transaction's permissibility. As a result, reversion means two entirely different things in these contexts. While a strict reversion of either soft fork is by definition a hard fork, the requirement of reversion as a result of undesired behavior is not the same. In the case of opcodes, there is almost never a requirement to revert it. If you don't like the way the opcodes behave, then you just don't use them. If you don't like the reduction of the block subsidy, well that's a much bigger problem.
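For readers less familiar with the OP_NOP upgrade pattern, here is a simplified sketch of why such a tightening is a soft fork, loosely modeled on OP_NOP2 becoming OP_CHECKLOCKTIMEVERIFY in BIP 65 (the function names and the omission of locktime-type details are mine):

```python
# Simplified sketch of the OP_NOP upgrade pattern (loosely modeled on OP_NOP2
# becoming OP_CHECKLOCKTIMEVERIFY in BIP 65); locktime-type details omitted.

def old_node_eval_nop2(stack: list[int], tx_locktime: int) -> bool:
    # Old rules: OP_NOP2 does nothing, so the script continues unconditionally.
    return True

def new_node_eval_cltv(stack: list[int], tx_locktime: int) -> bool:
    # New rules: fail unless the transaction's locktime is at least the value
    # on top of the stack. Scripts valid under the new rule were also valid
    # under the old one, so the change only rejects a subset of witnesses.
    if not stack:
        return False
    return tx_locktime >= stack[-1]
```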
I make this point to elucidate the idea that we cannot treat SoftForks™ as a single monolithic idea. Perhaps we need to come up with better terminology to be specific about what each fork actually is. The soft vs. hard distinction is a critical one, but it is not enough: treating noninvasive soft forks such as OP_NOP tightenings the same as more invasive changes obscures important differences. This has been proposed before [1], and while I do not think the terms cited are necessarily complete, they admit the low resolution of our current terminology.
> Soft fork features can (and should) obviously be tested thoroughly on testnet, signet, custom signets, sidechains etc on a standalone basis and a bundled basis.
I vehemently disagree that any consensus changes should be bundled, especially when it comes to activation parameters. When we start to bundle things, we amplify the community resources needed to do review, not reduce them. I suspect your opinion here is largely informed by your frustration with the Taproot Activation procedure that you underwent earlier this year. This is understandable. However, let me present the alternative case. If we start to bundle features, the review of the features gets significantly harder. As the Bitcoin project scales, the ability of any one developer to understand the entire codebase declines. Bundling changes reduces the number of people who are qualified to review a particular proposal, and even worse, intimidates people who may be willing and able to review logically distinct portions of the proposal, resulting in lower amounts of review overall. This will likely have the opposite effect of what you seem to desire. BIP8 and BIP9 give us the ability to have multiple independent soft forks in flight at once. Choosing to bundle them instead makes little sense when we do not have to. Bundling them will inevitably degenerate into political horse trading and everyone will be worse off for it.
> part of the network disagrees on whether to activate the consensus change, part of the network disagrees on how to resist that consensus change, part of the network disagrees on how to activate that consensus change etc
Disagreements, and by extension forks, are a part of Bitcoin. What is important is that they are well defined and clean. This is the reason why the mandatory signaling period exists in BIP8/9: so that clients that intend to reject the soft fork change have a very easy means of doing so in a clean break where consensus is clearly divergent. In accordance with this, consensus changes should be sequenced so that people can decide which sides of the forks they want to follow and the economic reality can reorganize around that. If you choose to bundle them, you have one of two outcomes: either consensus atomizes into a mist where people have different ideas of which subsets of a soft fork bundle they want to adopt, or, more likely, the network reconverges on the old client with none of the soft fork rules in place. This will lead to significantly more confusion as well, given that with sufficient miner consensus some of the rules may stick anyway even if the rest of the user base reconverges on the old client.
It is quite likely less damaging to consensus to have frequent but strictly sequenced soft forks so that if one of the new rules is contentious the break can happen cleanly. That said, if Core or any other client wishes to cut a release of the software with the parameters bundled into a single release, that is a significantly more palatable state of affairs, as you can still pipeline signaling and activation. However, the protocol itself adopting a tendency to activate unrelated proposals in bundles is a recipe for disaster.
Respectfully,
Keagan
> Interesting discussion. Correct me if I'm wrong: but putting too many features together in one shot can only make things harder to debug in production if something very unexpected happens. It's a basic principle of software engineering.
Soft fork features can (and should) obviously be tested thoroughly on testnet, signet, custom signets, sidechains etc on a standalone basis and a bundled basis. But whether or not it is a basic principle of general software engineering kind of misses the point. Security critical software clearly isn't engineered in the same way as a new social media app. Bugs are easily reverted in a new social media app. A consensus change is extremely hard to revert and probably requires a hard fork, a level of central coordination we generally attempt to avoid and a speed of deployment that we also attempt to avoid. On top of that we aren't just dealing with security critical software. One of the most important objectives is to keep all the nodes on the network in consensus. Introducing a consensus change before we are comfortable there is community consensus for it is a massive effective bug in itself. The network can split in multiple ways e.g. part of the network disagrees on whether to activate the consensus change, part of the network disagrees on how to resist that consensus change, part of the network disagrees on how to activate that consensus change etc
In addition, a social media app can experiment in production with whether Feature A works, whether Feature B works, or whether Feature A and B work best together. In Bitcoin, if we activate consensus Feature A, later decide we want consensus Feature B, but find out that by previously activating Feature A we can't have Feature B (it is now unsafe to activate it), or that its design now has to be suboptimal because we have to ensure it can safely work in the presence of Feature A, then we have made a mistake by activating Feature A in the first place. Decentralized security-critical consensus changes are an emerging field in themselves and really can't be treated like any other software project. I'm sure this will become universally understood over time.
--
Michael Folkson
Email: michaelfolkson at protonmail.com
Keybase: michaelfolkson
PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Friday, October 15th, 2021 at 1:43 AM, Felipe Micaroni Lalli via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
Interesting discussion. Correct me if I'm wrong: but putting too many features together in one shot can only make things harder to debug in production if something very unexpected happens. It's a basic principle of software engineering.
Change. Deploy. Nothing bad happened? Change it a little more. Deploy.
Or: Change, change, change. Deploy. Did something bad happen? What change caused the problem?
On Mon, Oct 11, 2021 at 12:12:58PM -0700, Jeremy via bitcoin-dev wrote:
> > ... in this post I will argue against frequent soft forks with a single or
> minimal
> > set of features and instead argue for infrequent soft forks with batches
> > of features.
> I think this type of development has been discussed in the past and has been
> rejected.
> AJ: - improvements: changes might not make everyone better off, but we
> don't want changes to screw anyone over either -- pareto
> improvements in economics, "first, do no harm", etc. (if we get this
> right, there's no need to make compromises and bundle multiple
> flawed proposals so that everyone's an equal mix of happy and
> miserable)
I don't think your conclusion above matches my opinion, for what it's
worth.
If you've got two features, A and B, where the game theory is:
If A happens, I'm +100, You're -50
If B happens, I'm -50, You're +100
then even though A+B is +50, +50, I do think the answer should
generally be "think harder and come up with better proposals" rather than
"implement A+B as a bundle that makes us both +50".
_But_ if the two features are more like:
If C happens, I'm +100, You're +/- 0
If D happens, I'm +/- 0, You're +100
then I don't have a problem with bundling them together as a single
simultaneous activation of both C and D.
Also, you can have situations where things are better together,
that is:
If E happens, we're both at +100
If F happens, we're both at +50
If E+F both happen, we're both at +9000
In general, I think combining proposals when the combination is better
than the individual proposals were is obviously good; and combining
related proposals into a single activation can be good if it is easier
to think about the ideas as a set.
It's only when you'd be rejecting the proposal on its own merits that
I think combining it with others is a bad idea in principle.
For specific examples, we bundled schnorr, Taproot, MAST, OP_SUCCESSx
and CHECKSIGADD together because they do have synergies like that; we
didn't bundle ANYPREVOUT and graftroot despite the potential synergies
because those features needed substantially more study.
The nulldummy soft-fork (bip 147) was deployed concurrently with
the segwit soft-fork (bip 141, 143), but I don't think there was any
particular synergy or need for those things to be combined, it just
reduced the overhead of two sets of activation signalling to one.
Note that the implementation code for nulldummy had already been merged
and was applied as relay policy well before activation parameters were
defined (May 2014 via PR#3843 vs Sep 2016 for PR#8636), let alone becoming
an active soft fork.
Cheers,
aj
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev