* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
@ 2022-02-26 7:47 Prayank
2022-02-26 16:18 ` Billy Tetrud
0 siblings, 1 reply; 34+ messages in thread
From: Prayank @ 2022-02-26 7:47 UTC (permalink / raw)
To: ZmnSCPxj; +Cc: Bitcoin Dev
Good morning ZmnSCPxj,
> Of course, I know of no such technique, but given that a technique (Drivechains) which before would have required its own consensus change, turns out to be implementable inside recursive covenants, then I wonder if there are other things that would have required their own consensus change that are now *also* implementable purely in recursive covenants.
Agree. I would be interested to know what is NOT possible once we have recursive covenants.
> if there is *now* consensus that Drivechains are not bad, go ahead, add recursive covenants (but please can we add `SIGHASH_NOINPUT` and `OP_CTV` first?)
Agree and I think everything can be done in separate soft forks.
--
Prayank
A3B1 E430 2298 178F
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-26 7:47 [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT Prayank
@ 2022-02-26 16:18 ` Billy Tetrud
0 siblings, 0 replies; 34+ messages in thread
From: Billy Tetrud @ 2022-02-26 16:18 UTC (permalink / raw)
To: Prayank, Bitcoin Protocol Discussion
> If Drivechains are bad for whatever reason, we should not add recursive
covenants.
Bad "for who" was the crux of my question to you. Even if drivechains are
always bad for their users, I don't think that's a good enough reason to
block things that could allow people to build user-space drivechains, as
long as they don't negatively affect normal Bitcoin users.
> Drivechains are not a scaling solution
I generally agree; it's more of a laboratory where many things (including
scaling solutions) can be tested.
> Principle of Least Power.
A concern is that, since it turns out recursive covenants are sufficient to
implement Drivechains, recursive covenants may also enable *other*
techniques, currently unknown, which may have negative effects on Bitcoin,
or which would be considered undesirable by a significant section of the
userbase.
I think the principle of least power is a good one, but it cannot be dogma.
I think your point about unknown consequences is reasonable and a study on
that kind of thing would be quite valuable. The community has discussed it
multiple times in the past, and so at least some thought has gone into it,
with nothing very strong in opposition that I know of. Has anyone made a
good summary/study of the kinds of things recursive covenants allow?
On Sat, Feb 26, 2022, 02:35 Prayank via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> Good morning ZmnSCPxj,
>
> > Of course, I know of no such technique, but given that a technique
> (Drivechains) which before would have required its own consensus change,
> turns out to be implementable inside recursive covenants, then I wonder if
> there are other things that would have required their own consensus change
> that are now *also* implementable purely in recursive covenants.
>
>
> Agree. I would be interested to know what is NOT possible once we have
> recursive covenants.
>
> > if there is *now* consensus that Drivechains are not bad, go ahead, add
> recursive covenants (but please can we add `SIGHASH_NOINPUT` and `OP_CTV`
> first?)
>
>
> Agree and I think everything can be done in separate soft forks.
>
>
>
>
> --
> Prayank
>
> A3B1 E430 2298 178F
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
@ 2022-02-14 2:40 Lucky Star
0 siblings, 0 replies; 34+ messages in thread
From: Lucky Star @ 2022-02-14 2:40 UTC (permalink / raw)
To: bitcoin-dev
Hello,
I'm opposed to recursive covenants because they allow the government to _gradually_ restrict all bitcoins.
Without covenants, if the government dictates to miners which transactions must be added to each block, the other miners can fork to a free blockchain; thus the government cannot impose its desires on the Bitcoin community. With covenants, the government can gradually force all companies to use only the permitted covenants. Then there is no free blockchain, and the government controls more bitcoins each day.
Bitcoin experts Greg Maxwell and Peter Todd explained this reason and many others on the forum.[1] More experts also agreed, and it's common knowledge. I strongly recommend supporting OP_CHECKTEMPLATEVERIFY. It is well reviewed, and it protects the Bitcoin community from the bad effects of covenants. With OP_CHECKTEMPLATEVERIFY, we achieve the best of both worlds.
With best regards,
Lucky Star
[1] Maxwell, Greg. "CoinCovenants using SCIP signatures, an amusingly bad idea." https://bitcointalk.org/index.php?topic=278122.0;all
> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev wrote:
> > Whether [recursive covenants] is an issue or not precluding this sort
> > of design or not, I defer to others.
>
>
> For reference, I believe the last time the merits of allowing recursive
> covenants was discussed at length on this list[1], not a single person
> replied to say that they were opposed to the idea.
>
>
> I would like to suggest that anyone opposed to recursive covenants speak
> for themselves (if any intelligent such people exist). Citing the risk
> of recursive covenants without presenting a credible argument for the
> source of that risk feels to me like (at best) stop energy[2] and (at
> worst) FUD.
>
>
> -Dave
>
>
> [1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019203.html
> [2] http://radio-weblogs.com/0107584/stories/2002/05/05/stopEnergyByDaveWiner.html
> (thanks to AJ who told me about stop energy one time when I was
> producing it)
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
* [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
@ 2022-01-26 17:20 Russell O'Connor
2022-02-08 3:40 ` Rusty Russell
0 siblings, 1 reply; 34+ messages in thread
From: Russell O'Connor @ 2022-01-26 17:20 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
Recapping the relationship between CTV and ANYPREVOUT::
It is known that there is a significant amount of overlap in the
applications that are enabled by the CTV and ANYPREVOUT proposals despite
the fact that their primary applications (congestion control for CTV and
eltoo lightning channels for ANYPREVOUT) are quite distinct.
In particular, ANYPREVOUT can enable most of the applications of CTV,
albeit with a higher cost. The primary functionality of CTV is to allow a
scriptPubKey to make a commitment to its spending transaction's hash with
the input's TXID excluded from the hash. This exclusion is necessary
because the scriptPubKey is hashed into the input's TXID, and including the
TXID would cause a cycle of hash commitments, which is impossible to
construct. On the other hand, ANYPREVOUT defines a signature hash mode
that similarly excludes the input's TXID for its purpose of rebindable
signatures.
This means that ANYPREVOUT can mimic most of the properties of CTV by
committing both a public key along with an ANYPREVOUT signature inside
scriptPubKey. In fact, the only reason Bitcoin doesn't have covenants
today is due to this cycle between scriptPubKeys and the TXIDs that occur
in all the sighash modes.
The major differences between simulating CTV via ANYPREVOUT and the actual
CTV proposal are: (1) the cost of simulating CTV. With CTV the spending
transaction is committed using a hash of 32 bytes, while simulating it with
ANYPREVOUT requires 64 bytes for a signature, and 32 bytes for some public
key, plus a few more bytes for various flags. Some of that cost could be
reduced by using the inner public key (1 byte representation) and, if we
had CAT, maybe by assembling the signature from reusable pieces (i.e.
setting the nonce of the committed signature equal to the public key).
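For concreteness, here is a rough sketch of the two scriptPubKeys side by
side (the encodings, sizes, and flag bytes below are illustrative only, not
normative):

   # CTV proper: commit to the 32-byte template hash directly.
   <32-byte-template-hash> OP_CHECKTEMPLATEVERIFY

   # CTV emulated with ANYPREVOUT: bake a pre-computed ANYPREVOUT
   # signature and a public key into the scriptPubKey itself.  The
   # signature only verifies against the one intended spending
   # transaction, so the output is committed to it just the same,
   # at the cost of roughly 64 + 32 extra bytes.
   <anyprevout-signature> <anyprevout-pubkey> OP_CHECKSIGVERIFY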
The other major difference is: (2) CTV's transaction hash covers values
such as the number of inputs in the transaction and their sequence numbers,
which ANYPREVOUT does not cover. CTV's hash contains enough information so
that when combined with the missing TXIDs, you can compute the TXID of the
spending transaction. In particular if the number of inputs is committed
to being 1, once the scriptpubkey's transaction id is known and committed
to the blockchain, the TXID of its spending transaction is deducible. And
if that transaction has outputs that have CTV commitments in them, you can
deduce their spending TXIDs in turn. While this is a pretty neat feature,
something that ANYPREVOUT cannot mimic, the main application for it is
listed as using congestion control to fund lightning channels, fixing their
TXIDs in advance of them being placed on chain. However, if ANYPREVOUT
were used to mimic CTV, then likely it would be eltoo channels that would
be funded, and it isn't necessary to know the TXIDs of eltoo channels in
advance in order to use them.
An Alternative Proposal::
Given the overlap in functionality between CTV and ANYPREVOUT, I think it
makes sense to decompose their operations into their constituent pieces and
reassemble their behaviour programmatically. To this end, I'd like to
instead propose OP_TXHASH and OP_CHECKSIGFROMSTACKVERIFY.
OP_TXHASH would pop a txhash flag from the stack and compute a (tagged)
txhash in accordance with that flag, and push the resulting hash onto the
stack.
OP_CHECKSIGFROMSTACKVERIFY would pop a pubkey, message, and signature from
the stack and fail if the signature does not verify on that message.
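In stack-effect notation (a sketch of the intended semantics only; the exact
tagging and encodings would be specified separately):

   # OP_TXHASH:
   #   ... <txhash_flag>          ->  ... <32-byte tagged txhash>
   #   (hashes the transaction fields selected by the flag)
   #
   # OP_CHECKSIGFROMSTACKVERIFY:
   #   ... <sig> <msg> <pubkey>   ->  ...
   #   (fails the script unless <sig> is a valid signature by <pubkey>
   #    over <msg>)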
CTV and TXHASH have roughly equivalent functionality. 'CTV DROP' can be
simulated by '<ctv_style_flag> TXHASH EQUALVERIFY'. The reverse is also
true where '<ctv_style_flag> TXHASH' can be simulated by CTV by
'<ctv-result-from-witness-stack> CTV'; however, as you can see, simulating
TXHASH from CTV is much more expensive than the other way around, because
the resulting 32-byte hash result must be included as part of the witness
stack.
'<anyprevout-pubkey> CHECKSIGVERIFY' can be simulated by '<apo_style_flag>
TXHASH <pubkey> CHECKSIGFROMSTACKVERIFY'. Here we see the advantage of
pushing the hash value onto the stack. APO can be simulated without
needing to include a copy of the resulting txhash inside the witness data.
In addition to the CTV and ANYPREVOUT applications, with
CHECKSIGFROMSTACKVERIFY we can verify signatures on arbitrary messages
signed by oracles for oracle applications. This is where we see the
benefit of decomposing operations into primitive pieces. By giving users
the ability to program their own use cases from components, we get more
applications out of fewer op codes!
Caveats::
First, I acknowledge that replicating the behaviour of CTV and ANYPREVOUT
does cost a few more bytes than using the custom purpose built proposals
themselves. That is the price to be paid when we choose the ability to
program solutions from pieces. But we get to reap the advantages of being
able to build more applications from these pieces.
Unlike CTV, TXHASH is not NOP-compatible and can only be implemented within
tapscript. In particular, bare CTV isn't possible with this proposal.
However, this proposal doesn't preclude the possibility of having CTV added
to legacy script while having TXHASH added to tapscript.
For similar reasons, TXHASH is not amenable to extending the set of txflags
at a later date. In theory, one could have TXHASH abort-with-success when
encountering an unknown set of flags. However, this would make analyzing
tapscript much more difficult. Tapscripts would then be able to abort with
success or failure depending on the order script fragments are assembled
and executed, and getting the order incorrect would be catastrophic. This
behavior is manifestly different from the current batch of OP_SUCCESS
opcodes that abort-with-success just by their mere presence, whether they
would be executed or not.
I believe the difficulties with upgrading TXHASH can be mitigated by
designing a robust set of TXHASH flags from the start. For example having
bits to control whether (1) the version is covered; (2) the locktime is
covered; (3) txids are covered; (4) sequence numbers are covered; (5) input
amounts are covered; (6) input scriptpubkeys are covered; (7) number of
inputs is covered; (8) output amounts are covered; (9) output scriptpubkeys
are covered; (10) number of outputs is covered; (11) the tapbranch is
covered; (12) the tapleaf is covered; (13) the opseparator value is
covered; (14) whether all, one, or no inputs are covered; (15) whether all,
one or no outputs are covered; (16) whether the one input position is
covered; (17) whether the one output position is covered; (18) whether the
sighash flags are covered or not (note: whether or not the sighash flags
are or are not covered must itself be covered). Possibly specifying which
input or output position is covered in the single case and whether the
position is relative to the input's position or is an absolute position.
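Purely as an illustration of the shape such a flag word might take (the bit
assignments here are made up for the example, not a proposed encoding), a
CTV-like template check would then be spelled:

   # Suppose one bit per item above: bit 0 = version, bit 1 = locktime,
   # further bits selecting "all outputs" (amounts and scriptPubKeys), etc.
   # A script fixing "version + locktime + all outputs" is then simply:
   <flags: version|locktime|all-outputs> TXHASH
   <expected-32-byte-txhash> EQUALVERIFY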
That all said, even if other txhash flag modes are needed in the future,
adding TXHASH2 always remains an option.
Interactions with potential future opcodes::
We should give some consideration to how these opcodes may interact with
future opcodes such as CAT or rolling SHA256 opcodes, and how they might
interface with other covenant opcodes that do things like directly
pushing input or output amounts onto the stack for computation purposes,
opcodes which have been added to the Elements project.
With CAT and/or rolling SHA256 opcodes and/or existing SHA256 opcodes, the
CHECKSIGFROMSTACKVERIFY could verify signatures on programmatically
assembled messages. Also, in combination with multiple calls to TXHASH, it
could be used to create signatures that commit to complex subsets of
transaction data.
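A hand-wavy sketch of that kind of composition, assuming CAT and a plain or
rolling SHA256 opcode are available (the message framing is made up for
illustration):

   # Witness supplies: <oracle-sig> <attested-value>
   <flags-covering-outputs> TXHASH   # hash of this transaction's outputs
   CAT                               # message = <attested-value> || txhash
   SHA256                            # (or assembled with rolling SHA256)
   <oracle-pubkey>
   CHECKSIGFROMSTACKVERIFY           # the oracle must have signed exactly that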
If new opcodes are added to push parts of the transaction data directly
onto the stack, e.g. OP_INSPECTOUTPUTVALUE, there is perhaps concern that
they would obsolete TXHASH, since, in the presence of rolling SHA256
opcodes, TXHASH could be simulated. However, given that TXHASH can
compactly create a hash of large portions of transaction data, it seems
unlikely that TXHASH would fall into disuse. Also, a combination of TXHASH
and transaction introspection opcodes can be used to build "*subtractive
covenants*".
The usual way of building a covenant, which we will call "*additive *
*covenants*", is to push all the parts of the transaction data you would
like to fix onto the stack, hash it all together, and verify the resulting
hash matches a fixed value. Another way of building covenants, which we
will call "*subtractive covenants*", is to push all the parts of the
transaction data you would like to remain free onto the stack. Then use
rolling SHA256 opcodes starting from a fixed midstate that commits to a
prefix of the transaction hash data. The free parts are hashed into that
midstate. Finally, the resulting hash value is verified to match a value
returned by TXHASH. The ability to nicely build subtractive covenants
depends on the details of how the TXHASH hash value is constructed,
something that I'm told CTV has given consideration to.
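Very roughly, and borrowing Elements-style rolling SHA256 opcode names purely
for illustration (the real stack discipline of those opcodes differs in
detail), a subtractive covenant might be sketched as:

   # <fixed-midstate> is hard-coded in the script and already commits to
   # the prefix of the txhash serialization that must not change; the
   # witness supplies only the data left free, e.g. <free-part>.
   <fixed-midstate> <free-part> SHA256UPDATE   # absorb the free data
   SHA256FINALIZE                              # reconstructed full hash
   <flags-covering-everything> TXHASH          # actual hash of this tx
   EQUALVERIFY                                 # the two must match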
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-01-26 17:20 [bitcoin-dev] " Russell O'Connor
@ 2022-02-08 3:40 ` Rusty Russell
2022-02-08 4:34 ` Jeremy Rubin
0 siblings, 1 reply; 34+ messages in thread
From: Rusty Russell @ 2022-02-08 3:40 UTC (permalink / raw)
To: Russell O'Connor, Bitcoin Protocol Discussion
Russell O'Connor via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> writes:
> Given the overlap in functionality between CTV and ANYPREVOUT, I think it
> makes sense to decompose their operations into their constituent pieces and
> reassemble their behaviour programmatically. To this end, I'd like to
> instead propose OP_TXHASH and OP_CHECKSIGFROMSTACKVERIFY.
>
> OP_TXHASH would pop a txhash flag from the stack and compute a (tagged)
> txhash in accordance with that flag, and push the resulting hash onto the
> stack.
It may be worth noting that OP_TXHASH can be further decomposed into
OP_TX (and OP_TAGGEDHASH, or just reuse OP_SHA256).
OP_TX would place the concatenated selected fields onto the stack
(rather than hashing them). This is more compact for some tests
(e.g. testing tx version for 2 is "OP_TX(version) 1 OP_EQUALS" vs
"OP_TXHASH(version) 012345678...aabbccddeeff OP_EQUALS"), and also allows range
testing (e.g. amount less than X or greater than X, or fewer than 3 inputs).
> I believe the difficulties with upgrading TXHASH can be mitigated by
> designing a robust set of TXHASH flags from the start. For example having
> bits to control whether (1) the version is covered; (2) the locktime is
> covered; (3) txids are covered; (4) sequence numbers are covered; (5) input
> amounts are covered; (6) input scriptpubkeys are covered; (7) number of
> inputs is covered; (8) output amounts are covered; (9) output scriptpubkeys
> are covered; (10) number of outputs is covered; (11) the tapbranch is
> covered; (12) the tapleaf is covered; (13) the opseparator value is
> covered; (14) whether all, one, or no inputs are covered; (15) whether all,
> one or no outputs are covered; (16) whether the one input position is
> covered; (17) whether the one output position is covered; (18) whether the
> sighash flags are covered or not (note: whether or not the sighash flags
> are or are not covered must itself be covered). Possibly specifying which
> input or output position is covered in the single case and whether the
> position is relative to the input's position or is an absolute position.
These easily map onto OP_TX, "(1) the version is pushed as u32, (2) the
locktime is pushed as u32, ...".
We might want to push SHA256() of scripts instead of scripts themselves,
to reduce the possibility of DoS.
I suggest, also, that 14 (and similarly 15) be defined as two bits:
00 - no inputs
01 - all inputs
10 - current input
11 - pop number from stack, fail if >= number of inputs or no stack elems.
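For example (writing OP_TX's field selection informally, and ignoring the
usual caveats about Script's numeric limits for 8-byte amounts), the range
tests mentioned above might look like:

   # fail unless the current input's amount is below some limit
   OP_TX(current input amount) <limit> OP_LESSTHAN OP_VERIFY

   # fail unless the transaction has fewer than 3 inputs
   OP_TX(number of inputs) 3 OP_LESSTHAN OP_VERIFY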
Cheers,
Rusty.
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-08 3:40 ` Rusty Russell
@ 2022-02-08 4:34 ` Jeremy Rubin
2022-02-11 0:55 ` [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was " David A. Harding
0 siblings, 1 reply; 34+ messages in thread
From: Jeremy Rubin @ 2022-02-08 4:34 UTC (permalink / raw)
To: Rusty Russell, Bitcoin Protocol Discussion
Rusty,
Note that this sort of design introduces recursive covenants similarly to
how I described above.
Whether that is an issue precluding this sort of design or not, I
defer to others.
Best,
Jeremy
On Mon, Feb 7, 2022 at 7:57 PM Rusty Russell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> Russell O'Connor via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org>
> writes:
> > Given the overlap in functionality between CTV and ANYPREVOUT, I think it
> > makes sense to decompose their operations into their constituent pieces
> and
> > reassemble their behaviour programmatically. To this end, I'd like to
> > instead propose OP_TXHASH and OP_CHECKSIGFROMSTACKVERIFY.
> >
> > OP_TXHASH would pop a txhash flag from the stack and compute a (tagged)
> > txhash in accordance with that flag, and push the resulting hash onto the
> > stack.
>
> It may be worth noting that OP_TXHASH can be further decomposed into
> OP_TX (and OP_TAGGEDHASH, or just reuse OP_SHA256).
>
> OP_TX would place the concatenated selected fields onto the stack
> (rather than hashing them) This is more compact for some tests
> (e.g. testing tx version for 2 is "OP_TX(version) 1 OP_EQUALS" vs
> "OP_TXHASH(version) 012345678...aabbccddeeff OP_EQUALS"), and also range
> testing (e.g amount less than X or greater than X, or less than 3 inputs).
>
> > I believe the difficulties with upgrading TXHASH can be mitigated by
> > designing a robust set of TXHASH flags from the start. For example
> having
> > bits to control whether (1) the version is covered; (2) the locktime is
> > covered; (3) txids are covered; (4) sequence numbers are covered; (5)
> input
> > amounts are covered; (6) input scriptpubkeys are covered; (7) number of
> > inputs is covered; (8) output amounts are covered; (9) output
> scriptpubkeys
> > are covered; (10) number of outputs is covered; (11) the tapbranch is
> > covered; (12) the tapleaf is covered; (13) the opseparator value is
> > covered; (14) whether all, one, or no inputs are covered; (15) whether
> all,
> > one or no outputs are covered; (16) whether the one input position is
> > covered; (17) whether the one output position is covered; (18) whether
> the
> > sighash flags are covered or not (note: whether or not the sighash flags
> > are or are not covered must itself be covered). Possibly specifying
> which
> > input or output position is covered in the single case and whether the
> > position is relative to the input's position or is an absolute position.
>
> These easily map onto OP_TX, "(1) the version is pushed as u32, (2) the
> locktime is pushed as u32, ...".
>
> We might want to push SHA256() of scripts instead of scripts themselves,
> to reduce possibility of DoS.
>
> I suggest, also, that 14 (and similarly 15) be defined two bits:
> 00 - no inputs
> 01 - all inputs
> 10 - current input
> 11 - pop number from stack, fail if >= number of inputs or no stack elems.
>
> Cheers,
> Rusty.
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
* [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-08 4:34 ` Jeremy Rubin
@ 2022-02-11 0:55 ` David A. Harding
2022-02-11 3:42 ` Jeremy Rubin
` (3 more replies)
0 siblings, 4 replies; 34+ messages in thread
From: David A. Harding @ 2022-02-11 0:55 UTC (permalink / raw)
To: Jeremy Rubin, Bitcoin Protocol Discussion
On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev wrote:
> Whether [recursive covenants] is an issue or not precluding this sort
> of design or not, I defer to others.
For reference, I believe the last time the merits of allowing recursive
covenants was discussed at length on this list[1], not a single person
replied to say that they were opposed to the idea.
I would like to suggest that anyone opposed to recursive covenants speak
for themselves (if any intelligent such people exist). Citing the risk
of recursive covenants without presenting a credible argument for the
source of that risk feels to me like (at best) stop energy[2] and (at
worst) FUD.
-Dave
[1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019203.html
[2] http://radio-weblogs.com/0107584/stories/2002/05/05/stopEnergyByDaveWiner.html
(thanks to AJ who told me about stop energy one time when I was
producing it)
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-11 0:55 ` [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was " David A. Harding
@ 2022-02-11 3:42 ` Jeremy Rubin
2022-02-11 17:42 ` James O'Beirne
` (2 subsequent siblings)
3 siblings, 0 replies; 34+ messages in thread
From: Jeremy Rubin @ 2022-02-11 3:42 UTC (permalink / raw)
To: David A. Harding; +Cc: Bitcoin Protocol Discussion
I don't have a specific response to share at this moment, but I may make
one later.
But for the sake of elevating the discourse, I'd encourage people
responding to this to read through
https://rubin.io/bitcoin/2021/12/04/advent-7/ as I think it has some
helpful terminology and categorizations.
I bring this up because I think that recursion is often given as a
shorthand for "powerful" because the types of operations that support
recursion typically also introduce open ended covenants, unless they are
designed specially not to. As a trivial example a covenant that makes a
coin spendable from itself to itself entirely with no authorization is
recursive but fully enumerated in a sense and not particularly interesting
or useful.
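To sketch that trivial example (pseudo-script only; the output-inspection
primitive is assumed for illustration, not proposed):

   # anyone may spend, but only back to exactly this same script, with
   # no signature required: recursive, yet fully enumerated.
   <assert: output[0].scriptPubKey == this input's scriptPubKey>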
Therefore, when responding, you might be careful to distinguish whether it is
just recursion which you take issue with, or open-endedness, or some
combination of properties which severally might be acceptable.
TL;DR there are different properties people might care about that get
lumped in with recursion; it's good to be explicit about whether it is a
recursion issue or something else.
Cheers,
Jeremy
On Thu, Feb 10, 2022, 4:55 PM David A. Harding <dave@dtrt.org> wrote:
> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev
> wrote:
> > Whether [recursive covenants] is an issue or not precluding this sort
> > of design or not, I defer to others.
>
> For reference, I believe the last time the merits of allowing recursive
> covenants was discussed at length on this list[1], not a single person
> replied to say that they were opposed to the idea.
>
> I would like to suggest that anyone opposed to recursive covenants speak
> for themselves (if any intelligent such people exist). Citing the risk
> of recursive covenants without presenting a credible argument for the
> source of that risk feels to me like (at best) stop energy[2] and (at
> worst) FUD.
>
> -Dave
>
> [1]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019203.html
> [2]
> http://radio-weblogs.com/0107584/stories/2002/05/05/stopEnergyByDaveWiner.html
> (thanks to AJ who told me about stop energy one time when I was
> producing it)
>
>
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-11 0:55 ` [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was " David A. Harding
2022-02-11 3:42 ` Jeremy Rubin
@ 2022-02-11 17:42 ` James O'Beirne
2022-02-11 18:12 ` digital vagabond
2022-02-18 7:34 ` ZmnSCPxj
2022-02-23 11:28 ` ZmnSCPxj
3 siblings, 1 reply; 34+ messages in thread
From: James O'Beirne @ 2022-02-11 17:42 UTC (permalink / raw)
To: David A. Harding, Bitcoin Protocol Discussion
I don't oppose recursive covenants per se, but in prior posts I have
expressed uncertainty about proposals that enable more "featureful"
covenants by adding more kinds of computation into bitcoin script.
Not that anyone here is necessarily saying otherwise, but I am very
interested in limiting operations in bitcoin script to "verification" (vs.
"computation") to the extent practical, and instead encouraging general
computation to be done off-chain. This of course isn't a new observation and I
think the last few years have been very successful to that effect, e.g. the
popularity of the "scriptless scripts" idea and Taproot's emphasis on
embedding computational artifacts in key tweaks.
My (maybe unfounded?) worry about opcodes like OP_CAT and OP_TX is that
more logic will live in script than is necessary, and so the burden to
verify the chain may grow and the extra "degrees of freedom" in script may
make it harder to reason about. But I guess at this point there aren't
alternative means to construct new kinds of sighashes that are necessary
for some interesting covenants.
One thing I like about CTV is that it buys a lot of functionality without
increasing the "surface area" of script's design. In general I think there
is a lot to be said for this "jets"-style approach[0] of codifying the
script operations that you'd actually want to do into single opcodes. This
adds functionality while introducing minimal surface area to script, giving
script implementers more flexibility for, say, optimization. But of course
this comes at the cost of precluding experimentation, and probably
requiring more soft-forking. Though whether the main chain is the place for
script experimentation using more general-purpose opcodes is
another interesting debate...
Sorry for going a little off-topic there.
[0]: https://medium.com/blockstream/simplicity-jets-release-803db10fd589
On Thu, Feb 10, 2022 at 7:55 PM David A. Harding via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev
> wrote:
> > Whether [recursive covenants] is an issue or not precluding this sort
> > of design or not, I defer to others.
>
> For reference, I believe the last time the merits of allowing recursive
> covenants was discussed at length on this list[1], not a single person
> replied to say that they were opposed to the idea.
>
> I would like to suggest that anyone opposed to recursive covenants speak
> for themselves (if any intelligent such people exist). Citing the risk
> of recursive covenants without presenting a credible argument for the
> source of that risk feels to me like (at best) stop energy[2] and (at
> worst) FUD.
>
> -Dave
>
> [1]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019203.html
> [2]
> http://radio-weblogs.com/0107584/stories/2002/05/05/stopEnergyByDaveWiner.html
> (thanks to AJ who told me about stop energy one time when I was
> producing it)
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-11 17:42 ` James O'Beirne
@ 2022-02-11 18:12 ` digital vagabond
2022-02-12 10:54 ` darosior
2022-02-17 15:15 ` Anthony Towns
0 siblings, 2 replies; 34+ messages in thread
From: digital vagabond @ 2022-02-11 18:12 UTC (permalink / raw)
To: James O'Beirne, Bitcoin Protocol Discussion
This is Shinobi (can verify out of band at @brian_trollz on Twitter, I only
signed up to the list with this email to read initially, but feel like I
should reply to this as I think I am one of the only people in this space
who has voiced concerns with recursive covenants).
My concerns don't really center specifically around recursion itself
necessarily, but unbounded recursion in combination with too much
generality/flexibility in what types of conditions future UTXOs can be
encumbered with based on the restriction of such covenants. Forgive the
hand-waving arguments without getting into specific opcodes, but I would
summarize my concerns with a hypothetical construct that I believe would be
incredibly damaging to fungibility. Imagine a covenant design that was
flexible enough to create an encumbrance like this: a script specifies a
specific key in a multisig controlled by some authority figure (or a branch
in the script that would allow unilateral control by such an authority),
and the conditions of the covenant would perpetually require that any spend
from the covenant can only be sent to a script involving that key from said
authority, preventing by consensus any removal of that central authority's
involvement in control over that UTXO. Such a construct would present
dangerous implications to the fungibility of individual UTXOs by
introducing a totally different risk model in being paid with such a coin
compared to any other coin not encumbered by such a condition, and also
potentially introduce a shift in the scope of what a 51% attack could
accomplish in terms of permanent consequences attempting to coerce coins
into such covenants, as opposed to right now only being able to accomplish
censorship or temporary network disruption.
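To make the shape of that concern concrete, a deliberately hand-wavy
pseudo-script (no current proposal provides these primitives in exactly this
form):

   # "perpetual co-signer" covenant: the authority key can never be
   # removed, because every output is forced to carry this same script.
   <user-pubkey> CHECKSIGVERIFY
   <authority-pubkey> CHECKSIGVERIFY
   <assert: every output's scriptPubKey repeats this exact script>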
I know that such a walled garden could easily be constructed now with
multisig and restrictions on where coins can be withdrawn to from exchanges
or whatever place they initially purchased from, as is demonstrated by the
implementation of the Asset Management Platform by Blockstream for use on
Liquid with regulated equity tokens, but I think the important distinction
between such a non-consensus system designed to enforce such restrictions and
a recursive covenant to accomplish the same is that in the case of a
multisig/non-consensus based system, exit from that restriction is still
possible under the consensus rules of the protocol. If such a construct was
possible to build with a recursive covenant enforced by consensus, coins
encumbered by such a covenant would literally be incapable of escaping
those restrictions without hardforking the protocol, leaving any such UTXOs
permanently non-fungible with ones not encumbered by such conditions.
I'm not that deeply familiar with all the working pieces involved in the
recent TXHASH + CSFS proposal, and whether such a type of overly (IMO)
generalized recursion would be possible to construct, but one of the
reasons CTV does not bother me in terms of such concerns is the inability
to infinitely recurse in such a generalized way given the requirements to
exactly specify the destination of future spends in constructing a chain of
CTV encumbrances. I'd very much appreciate any feedback on my concerns, and
if this sidetracks the discussion I apologize, but I felt, given that the issue
has been mentioned a few times in this thread, it was appropriate for me to
voice the concerns here so they could be addressed directly.
On Fri, Feb 11, 2022 at 11:42 AM James O'Beirne via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> I don't oppose recursive covenants per se, but in prior posts I have
> expressed uncertainty about proposals that enable more "featureful"
> covenants by adding more kinds of computation into bitcoin script.
>
> Not that anyone here is necessarily saying otherwise, but I am very
> interested in limiting operations in bitcoin script to "verification" (vs.
> "computation") to the extent practical, and instead encouraging general
> computation be done off-chain. This of course isn't a new observation and I
> think the last few years have been very successful to that effect, e.g. the
> popularity of the "scriptless scripts" idea and Taproot's emphasis on
> embedding computational artifacts in key tweaks.
>
> My (maybe unfounded?) worry about opcodes like OP_CAT and OP_TX is that
> more logic will live in script than is necessary, and so the burden to
> verify the chain may grow and the extra "degrees of freedom" in script may
> make it harder to reason about. But I guess at this point there aren't
> alternative means to construct new kinds of sighashes that are necessary
> for some interesting covenants.
>
> One thing I like about CTV is that it buys a lot of functionality without
> increasing the "surface area" of script's design. In general I think there
> is a lot to be said for this "jets"-style approach[0] of codifying the
> script operations that you'd actually want to do into single opcodes. This
> adds functionality while introducing minimal surface area to script, giving
> script implementers more flexibility for, say, optimization. But of course
> this comes at the cost of precluding experimentation, and probably
> requiring more soft-forking. Though whether the place for script
> experimentation using more general-purpose opcodes on the main chain is
> another interesting debate...
>
> Sorry for going a little off-topic there.
>
> [0]: https://medium.com/blockstream/simplicity-jets-release-803db10fd589
>
>
> On Thu, Feb 10, 2022 at 7:55 PM David A. Harding via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev
>> wrote:
>> > Whether [recursive covenants] is an issue or not precluding this sort
>> > of design or not, I defer to others.
>>
>> For reference, I believe the last time the merits of allowing recursive
>> covenants was discussed at length on this list[1], not a single person
>> replied to say that they were opposed to the idea.
>>
>> I would like to suggest that anyone opposed to recursive covenants speak
>> for themselves (if any intelligent such people exist). Citing the risk
>> of recursive covenants without presenting a credible argument for the
>> source of that risk feels to me like (at best) stop energy[2] and (at
>> worst) FUD.
>>
>> -Dave
>>
>> [1]
>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019203.html
>> [2]
>> http://radio-weblogs.com/0107584/stories/2002/05/05/stopEnergyByDaveWiner.html
>> (thanks to AJ who told me about stop energy one time when I was
>> producing it)
>>
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-11 18:12 ` digital vagabond
@ 2022-02-12 10:54 ` darosior
2022-02-12 15:59 ` Billy Tetrud
2022-02-17 15:15 ` Anthony Towns
1 sibling, 1 reply; 34+ messages in thread
From: darosior @ 2022-02-12 10:54 UTC (permalink / raw)
To: digital vagabond, Bitcoin Protocol Discussion
> Such a construct would present dangerous implications to the fungibility of individual UTXOs by introducing a totally different risk model in being paid with such a coin compared to any other coin not encumbered by such a condition
How is that different from being paid in an altcoin?
It seems to me that being able to say "sorry, your money isn't good here" is at the heart of Bitcoin's security (similarly to enforcing the network rules with your node). If someone can coerce you into using another currency, you've already lost.
What is left is the influence on the system of a user being coerced into using a gov coin (on another chain) or an encumbered bitcoin. Sure, the latter would decrease the available supply, but that's already possible to do today.
------- Original Message -------
On Friday, February 11th, 2022 at 7:12 PM, digital vagabond via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
> This is Shinobi (can verify out of band at @brian_trollz on Twitter, I only signed up to the list with this email to read initially, but feel like I should reply to this as I think I am one of the only people in this space who has voiced concerns with recursive covenants).
>
> My concerns don't really center specifically around recursion itself necessarily, but unbounded recursion in combination with too much generality/flexibility in what types of conditions future UTXOs can be encumbered with based on the restriction of such covenants. Forgive the hand waiving arguments without getting into specific opcodes, but I would summarize my concerns with a hypothetical construct that I believe would be incredibly damaging to fungibility. Imagine a covenant design that was flexible enough to create an encumbrance like this: a script specifies a specific key in a multisig controlled by some authority figure (or a branch in the script that would allow unilateral control by such an authority), and the conditions of the covenant would perpetually require than any spend from the covenant can only be sent to a script involving that key from said authority, preventing by consensus any removal of that central authorities involvement in control over that UTXO. Such a construct would present dangerous implications to the fungibility of individual UTXOs by introducing a totally different risk model in being paid with such a coin compared to any other coin not encumbered by such a condition, and also potentially introduce a shift in the scope of what a 51% attack could accomplish in terms of permanent consequences attempting to coerce coins into such covenants, as opposed to right now only being able to accomplish censorship or temporary network disruption.
>
> I know that such a walled garden could easily be constructed now with multisig and restrictions on where coins can be withdrawn to from exchanges or whatever place they initially purchased from, as is demonstrated by the implementation of the Asset Management Platform by Blockstream for use on Liquid with regulated equity tokens, but I think the important distinction between such non-consensus system designed to enforce such restrictions and a recursive covenant to accomplish the same is that in the case of a multisig/non-consensus based system, exit from that restriction is still possible under the consensus rules of the protocol. If such a construct was possible to build with a recursive covenant enforced by consensus, coins encumbered by such a covenant would literally be incapable of escaping those restrictions without hardforking the protocol, leaving any such UTXOs permanently non-fungible with ones not encumbered by such conditions.
>
> I'm not that deeply familiar with all the working pieces involved in the recent TXHASH + CSFS proposal, and whether such a type of overly (IMO) generalized recursion would be possible to construct, but one of the reasons CTV does not bother me in terms of such concerns is the inability to infinitely recurse in such a generalized way given the requirements to exactly specify the destination of future spends in constructing a chain of CTV encumbrances. I'd very much appreciate any feedback on my concerns, and if this side tracks the discussion I apologize, but I felt given the issue has been mentioned a few times in this thread it was appropriate for me to voice the concerns here so they could be addressed directly.
>
> On Fri, Feb 11, 2022 at 11:42 AM James O'Beirne via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> I don't oppose recursive covenants per se, but in prior posts I have expressed uncertainty about proposals that enable more "featureful" covenants by adding more kinds of computation into bitcoin script.
>>
>> Not that anyone here is necessarily saying otherwise, but I am very interested in limiting operations in bitcoin script to "verification" (vs. "computation") to the extent practical, and instead encouraging general computation be done off-chain. This of course isn't a new observation and I think the last few years have been very successful to that effect, e.g. the popularity of the "scriptless scripts" idea and Taproot's emphasis on embedding computational artifacts in key tweaks.
>>
>> My (maybe unfounded?) worry about opcodes like OP_CAT and OP_TX is that more logic will live in script than is necessary, and so the burden to verify the chain may grow and the extra "degrees of freedom" in script may make it harder to reason about. But I guess at this point there aren't alternative means to construct new kinds of sighashes that are necessary for some interesting covenants.
>>
>> One thing I like about CTV is that it buys a lot of functionality without increasing the "surface area" of script's design. In general I think there is a lot to be said for this "jets"-style approach[0] of codifying the script operations that you'd actually want to do into single opcodes. This adds functionality while introducing minimal surface area to script, giving script implementers more flexibility for, say, optimization. But of course this comes at the cost of precluding experimentation, and probably requiring more soft-forking. Though whether the place for script experimentation using more general-purpose opcodes on the main chain is another interesting debate...
>>
>> Sorry for going a little off-topic there.
>>
>> [0]: https://medium.com/blockstream/simplicity-jets-release-803db10fd589
>>
>> On Thu, Feb 10, 2022 at 7:55 PM David A. Harding via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev wrote:
>>>> Whether [recursive covenants] is an issue or not precluding this sort
>>>> of design or not, I defer to others.
>>>
>>> For reference, I believe the last time the merits of allowing recursive
>>> covenants was discussed at length on this list[1], not a single person
>>> replied to say that they were opposed to the idea.
>>>
>>> I would like to suggest that anyone opposed to recursive covenants speak
>>> for themselves (if any intelligent such people exist). Citing the risk
>>> of recursive covenants without presenting a credible argument for the
>>> source of that risk feels to me like (at best) stop energy[2] and (at
>>> worst) FUD.
>>>
>>> -Dave
>>>
>>> [1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019203.html
>>> [2] http://radio-weblogs.com/0107584/stories/2002/05/05/stopEnergyByDaveWiner.html
>>> (thanks to AJ who told me about stop energy one time when I was
>>> producing it)
>>>
>>> _______________________________________________
>>> bitcoin-dev mailing list
>>> bitcoin-dev@lists.linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-12 10:54 ` darosior
@ 2022-02-12 15:59 ` Billy Tetrud
0 siblings, 0 replies; 34+ messages in thread
From: Billy Tetrud @ 2022-02-12 15:59 UTC (permalink / raw)
To: darosior, Bitcoin Protocol Discussion
> in the case of a multisig/non-consensus based system, exit from that
restriction is still possible
But why do we care if someone reduces the value of coins they own by
permanently encumbering them in some way? Burning coins permanently
encumbers them so much they can't be spent at all. If the worry is
depleting the supply of sats, don't worry, the amount of value lost by
those encumbered is gained by the rest of the coins. Just like burning,
encumbering your coins in a way that devalues them is a donation to the
rest of us.
Could you clarify what harm there is to those who choose not to accept such
encumbered coins? Or are you just saying that those who do accept such
encumbered coins may be harmed by doing so?
On Sat, Feb 12, 2022, 06:11 darosior via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> Such a construct would present dangerous implications to the fungibility
> of individual UTXOs by introducing a totally different risk model in being
> paid with such a coin compared to any other coin not encumbered by such a
> condition
>
>
> How is that different from being paid in an altcoin?
> It seems to me that being able to say "sorry, your money isn't good here"
> is at the heart of Bitcoin's security (similarly to enforcing the network
> rules with your node). If someone can coerce you into using another
> currency, you've already lost.
>
> Now there is left the influence on the system of an user being coerced
> into using gov coin (on another chain) or an encumbered bit coin. Sure the
> latter would decrease the supply available, but that's already possible to
> do today.
>
> ------- Original Message -------
> On Friday, February 11th, 2022 at 7:12 PM, digital vagabond via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> This is Shinobi (can verify out of band at @brian_trollz on Twitter, I
> only signed up to the list with this email to read initially, but feel like
> I should reply to this as I think I am one of the only people in this space
> who has voiced concerns with recursive covenants).
>
> My concerns don't really center specifically around recursion itself
> necessarily, but unbounded recursion in combination with too much
> generality/flexibility in what types of conditions future UTXOs can be
> encumbered with based on the restriction of such covenants. Forgive the
> hand waiving arguments without getting into specific opcodes, but I would
> summarize my concerns with a hypothetical construct that I believe would be
> incredibly damaging to fungibility. Imagine a covenant design that was
> flexible enough to create an encumbrance like this: a script specifies a
> specific key in a multisig controlled by some authority figure (or a branch
> in the script that would allow unilateral control by such an authority),
> and the conditions of the covenant would perpetually require than any spend
> from the covenant can only be sent to a script involving that key from said
> authority, preventing by consensus any removal of that central authorities
> involvement in control over that UTXO. Such a construct would present
> dangerous implications to the fungibility of individual UTXOs by
> introducing a totally different risk model in being paid with such a coin
> compared to any other coin not encumbered by such a condition, and also
> potentially introduce a shift in the scope of what a 51% attack could
> accomplish in terms of permanent consequences attempting to coerce coins
> into such covenants, as opposed to right now only being able to accomplish
> censorship or temporary network disruption.
>
> I know that such a walled garden could easily be constructed now with
> multisig and restrictions on where coins can be withdrawn to from exchanges
> or whatever place they initially purchased from, as is demonstrated by the
> implementation of the Asset Management Platform by Blockstream for use on
> Liquid with regulated equity tokens, but I think the important distinction
> between such non-consensus system designed to enforce such restrictions and
> a recursive covenant to accomplish the same is that in the case of a
> multisig/non-consensus based system, exit from that restriction is still
> possible under the consensus rules of the protocol. If such a construct was
> possible to build with a recursive covenant enforced by consensus, coins
> encumbered by such a covenant would literally be incapable of escaping
> those restrictions without hardforking the protocol, leaving any such UTXOs
> permanently non-fungible with ones not encumbered by such conditions.
>
> I'm not that deeply familiar with all the working pieces involved in the
> recent TXHASH + CSFS proposal, and whether such a type of overly (IMO)
> generalized recursion would be possible to construct, but one of the
> reasons CTV does not bother me in terms of such concerns is the inability
> to infinitely recurse in such a generalized way given the requirements to
> exactly specify the destination of future spends in constructing a chain of
> CTV encumbrances. I'd very much appreciate any feedback on my concerns, and
> if this side tracks the discussion I apologize, but I felt given the issue
> has been mentioned a few times in this thread it was appropriate for me to
> voice the concerns here so they could be addressed directly.
>
> On Fri, Feb 11, 2022 at 11:42 AM James O'Beirne via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> I don't oppose recursive covenants per se, but in prior posts I have
>> expressed uncertainty about proposals that enable more "featureful"
>> covenants by adding more kinds of computation into bitcoin script.
>>
>> Not that anyone here is necessarily saying otherwise, but I am very
>> interested in limiting operations in bitcoin script to "verification" (vs.
>> "computation") to the extent practical, and instead encouraging general
>> computation be done off-chain. This of course isn't a new observation and I
>> think the last few years have been very successful to that effect, e.g. the
>> popularity of the "scriptless scripts" idea and Taproot's emphasis on
>> embedding computational artifacts in key tweaks.
>>
>> My (maybe unfounded?) worry about opcodes like OP_CAT and OP_TX is that
>> more logic will live in script than is necessary, and so the burden to
>> verify the chain may grow and the extra "degrees of freedom" in script may
>> make it harder to reason about. But I guess at this point there aren't
>> alternative means to construct new kinds of sighashes that are necessary
>> for some interesting covenants.
>>
>> One thing I like about CTV is that it buys a lot of functionality without
>> increasing the "surface area" of script's design. In general I think there
>> is a lot to be said for this "jets"-style approach[0] of codifying the
>> script operations that you'd actually want to do into single opcodes. This
>> adds functionality while introducing minimal surface area to script, giving
>> script implementers more flexibility for, say, optimization. But of course
>> this comes at the cost of precluding experimentation, and probably
>> requiring more soft-forking. Though whether the place for script
>> experimentation using more general-purpose opcodes on the main chain is
>> another interesting debate...
>>
>> Sorry for going a little off-topic there.
>>
>> [0]: https://medium.com/blockstream/simplicity-jets-release-803db10fd589
>>
>>
>> On Thu, Feb 10, 2022 at 7:55 PM David A. Harding via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev
>>> wrote:
>>> > Whether [recursive covenants] is an issue or not precluding this sort
>>> > of design or not, I defer to others.
>>>
>>> For reference, I believe the last time the merits of allowing recursive
>>> covenants was discussed at length on this list[1], not a single person
>>> replied to say that they were opposed to the idea.
>>>
>>> I would like to suggest that anyone opposed to recursive covenants speak
>>> for themselves (if any intelligent such people exist). Citing the risk
>>> of recursive covenants without presenting a credible argument for the
>>> source of that risk feels to me like (at best) stop energy[2] and (at
>>> worst) FUD.
>>>
>>> -Dave
>>>
>>> [1]
>>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019203.html
>>> [2]
>>> http://radio-weblogs.com/0107584/stories/2002/05/05/stopEnergyByDaveWiner.html
>>> (thanks to AJ who told me about stop energy one time when I was
>>> producing it)
>>>
>>> _______________________________________________
>>> bitcoin-dev mailing list
>>> bitcoin-dev@lists.linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 12827 bytes --]
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-11 18:12 ` digital vagabond
2022-02-12 10:54 ` darosior
@ 2022-02-17 15:15 ` Anthony Towns
1 sibling, 0 replies; 34+ messages in thread
From: Anthony Towns @ 2022-02-17 15:15 UTC (permalink / raw)
To: digital vagabond, Bitcoin Protocol Discussion
On Fri, Feb 11, 2022 at 12:12:28PM -0600, digital vagabond via bitcoin-dev wrote:
> Imagine a covenant design that was
> flexible enough to create an encumbrance like this: a script specifies a
> specific key in a multisig controlled by some authority figure (or a branch
> in the script that would allow unilateral control by such an authority),
> and the conditions of the covenant would perpetually require that any spend
> from the covenant can only be sent to a script involving that key from said
> authority, preventing by consensus any removal of that central authority's
> involvement in control over that UTXO.
> I know that such a walled garden could easily be constructed now with
> multisig and restrictions on where coins can be withdrawn to from exchanges
> or whatever [...], but I think the important distinction
> between such a non-consensus system designed to enforce such restrictions and
> a recursive covenant to accomplish the same is that in the case of a
> multisig/non-consensus based system, exit from that restriction is still
> possible under the consensus rules of the protocol.
I think that sort of encumbrance is already possible: you send bitcoin
to an OP_RETURN address and that is registered on some other system as a
way of "minting" coins there (ie, "proof of burn") at which point rules
other than bitcoin's apply. Bitcoin consensus guarantees the value can't
be extracted back out of the OP_RETURN value.
I think spacechains effectively takes up this concept for their one-way
peg:
https://bitcoin.stackexchange.com/questions/100537/what-is-spacechain
https://medium.com/@RubenSomsen/21-million-bitcoins-to-rule-all-sidechains-the-perpetual-one-way-peg-96cb2f8ac302
(I think spacechains requires a covenant construct to track the
single-tx-per-bitcoin-block that commits to the spacechain, but that's
not directly used for the BTC value that was pegged into the spacechain)
If we didn't have OP_RETURN, you could instead pay to a pubkey that's
constructed from a NUMS point / or a pedersen commitment, that's (roughly)
guaranteed unspendable, at least until secp256k1 is broken via bitcoin's
consensus rules (with the obvious disadvantage that nodes then can't
remove these outputs from the utxo set).
That was also used for XCP/Counterparty's ICO in 2014, at about 823 uBTC
per XCP on average (depending on when you got in it was between 666
uBTC/XCP and 1000 uBTC/XCP apparently), falling to a current price of
about 208 uBTC per XCP. It was about 1000 uBTC/XCP until mid 2018 though.
https://counterparty.io/news/why-proof-of-burn/
https://github.com/CounterpartyXCP/Documentation/blob/master/Basics/FAQ-XCP.md
These seem like they might be bad things for people to actually do
(why would you want to be paid to mine a spacechain in coins that can
only fall in value relative to bitcoin?), and certainly I don't think
we should do things just to make this easier; but it seems more like a
"here's why you're hurting yourself if you do this" thing, rather than a
"we can prevent you from doing it and we will" thing.
Cheers,
aj
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-11 0:55 ` [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was " David A. Harding
2022-02-11 3:42 ` Jeremy Rubin
2022-02-11 17:42 ` James O'Beirne
@ 2022-02-18 7:34 ` ZmnSCPxj
2022-02-23 11:28 ` ZmnSCPxj
3 siblings, 0 replies; 34+ messages in thread
From: ZmnSCPxj @ 2022-02-18 7:34 UTC (permalink / raw)
To: David A. Harding, Bitcoin Protocol Discussion
Good morning Dave,
> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev wrote:
>
> > Whether [recursive covenants] is an issue or not precluding this sort
> > of design or not, I defer to others.
>
> For reference, I believe the last time the merits of allowing recursive
> covenants was discussed at length on this list[1], not a single person
> replied to say that they were opposed to the idea.
>
> I would like to suggest that anyone opposed to recursive covenants speak
> for themselves (if any intelligent such people exist). Citing the risk
> of recursive covenants without presenting a credible argument for the
> source of that risk feels to me like (at best) stop energy[2] and (at
> worst) FUD.
Let me try to give that a shot.
(Just to be clear, I am not an artificial intelligence, thus, I am not an "intelligent such people".)
The objection here is that recursion can admit partial (i.e. Turing-complete) computation.
Turing-completeness implies that the halting problem cannot be solved for arbitrary programs in the language.
Now, a counter-argument to that is that rather than using arbitrary programs, we should just construct programs from provably-terminating components.
Thus, even though the language may admit arbitrary programs that cannot provably terminate, "wise" people will just focus on using that subset of the language, and programming styles within the language, which have proofs of termination.
Or in other words: people can just avoid accepting coin that is encumbered with a SCRIPT that is not trivially shown to be non-recursive.
The counter-counter-argument is that it leaves such validation to the user, and we should really create automation (i.e. lower-level non-sentient programs) to perform that validation on behalf of the user.
***OR*** we could just design our language so that such things are outright rejected by the language as a semantic error, of the same type as `for (int x = 0; x = y; x++);` is a semantic error that most modern C compilers will reject if given `-Wall -Werror`.
Yes, we want users to have freedom to shoot themselves in the feet, but we also want, when it is our turn to be the users, to keep walking with two feet as long as we can.
And yes, you could instead build a *separate* tool that checks if your SCRIPT can be proven to be non-recursive, and let the recursive construct remain in the interpreter and just require users who don't want their feet shot to use the separate tool.
That is certainly a valid alternate approach.
It is certainly valid to argue as well, that if a possibly-recursive construct is used, and you cannot find a proof-of-non-recursion, you should avoid coins encumbered with that SCRIPT (which is just a heuristic that approximates a tool for proof-of-non-recursion).
On the other hand, if we have the ability to identify SCRIPTs that have some proof-of-non-recursion, why is such a tool not built into the interpreter itself (in the form of operations that are provably non-recursive), why have a separate tool that people might be too lazy to actually use?
Regards,
ZmnSCPxj
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-11 0:55 ` [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was " David A. Harding
` (2 preceding siblings ...)
2022-02-18 7:34 ` ZmnSCPxj
@ 2022-02-23 11:28 ` ZmnSCPxj
2022-02-23 18:14 ` Paul Sztorc
2022-02-24 6:53 ` Anthony Towns
3 siblings, 2 replies; 34+ messages in thread
From: ZmnSCPxj @ 2022-02-23 11:28 UTC (permalink / raw)
To: David A. Harding, Bitcoin Protocol Discussion
Subject: Turing-Completeness, And Its Enablement Of Drivechains
Introduction
============
Recently, David Harding challenged those opposed to recursive covenants
for *actual*, *concrete* reasons why recursive covenants are a Bad Thing
(TM).
Generally, it is accepted that recursive covenants, together with the
ability to update loop variables, are sufficiently powerful to be
considered Turing-complete.
So, the question is: why is Turing-completeness bad, if it requires
*multiple* transactions in order to implement Turing-completeness?
Surely the practical matter that fees must be paid for each transaction
serves as a backstop against Turing-completeness?
i.e. Fees end up being the "maximum number of steps", which prevents a
language from becoming truly Turing-complete.
I point out here that Drivechains is implementable on a Turing-complete
language.
And we have already rejected Drivechains, for the following reason:
1. Sidechain validators and mainchain miners have a strong incentive to
merge their businesses.
2. Mainchain miners end up validating and committing to sidechain blocks.
3. Ergo, sidechains on Drivechains become a block size increase.
Also:
1. The sidechain-to-mainchain peg degrades the security of sidechain
users from consensus "everyone must agree to the rules" to democracy
"if enough enfranchised voters say so, they can beat you up and steal
your money".
In this write-up, I will demonstrate how recursive covenants, with
loop variable update, are sufficient to implement a form of Drivechains.
Logically, if the construct is general enough to form Drivechains, and
we rejected Drivechains, we should also reject the general construct.
Digression: `OP_TLUV` And `OP_CAT` Implement Recursive Covenants
================================================================
Let me now do some delaying tactics and demonstrate how `OP_TLUV` and
`OP_CAT` allow building recursive covenants by quining.
`OP_TLUV` has a mode where the current Tapleaf is replaced, and the
new address is synthesized.
Then, an output of the transaction is validated to check that it has
the newly-synthesized address.
Let me sketch how a simple recursive covenant can be built.
First, we split the covenant into three parts:
1. A hash.
2. A piece of script which validates that the first witness item
hashes to the above given hash in part #1, and then pushes that
item into the alt stack.
3. A piece of script which takes the item from the alt stack,
hashes it, then concatenates a `OP_PUSH` of the hash to that
item, then does a replace-mode `OP_TLUV`.
Parts 1 and 2 must directly follow each other, but other SCRIPT
logic can be put in between parts 2 and 3.
Part 3 can even occur multiple times, in various `OP_IF` branches.
In order to actually recurse, the top item in the witness stack must
be the covenant script, *minus* the hash.
This is supposed to be the quining argument.
The covenant script part #2 then checks that the quining argument
matches the hash that is hardcoded into the SCRIPT.
This hash is the hash of the *rest* of the SCRIPT.
If the quining argument matches, then it *is* the SCRIPT minus its
hash, and we know that we can use that to recreate the original SCRIPT.
Part #2 then pushes the quining argument out of the way into the alt stack.
Part #3 then recovers the original SCRIPT from the alt stack, and
resynthesizes the original SCRIPT.
The `OP_TLUV` is then able to resynthesize the original address.
Updating Loop Variables
-----------------------
But repeating the same SCRIPT over and over is boring.
What is much more interesting is to be able to *change* the SCRIPT
on each iteration, such that certain values on the SCRIPT can be
changed.
Suppose our SCRIPT has a loop variable `i` that we want to change
each time we execute our SCRIPT.
We can simply put this loop variable after part 1 and before part 2.
Then part 2 is modified to first push this loop variable onto the
alt stack.
The SCRIPT that gets checked always starts from part 2.
Thus, the SCRIPT, minus the loop variable, is always constant.
The SCRIPT can then access the loop variable from the alt stack.
Part 2 can be extended so that the loop variable is on top of the
quined SCRIPT on the alt stack.
This lets the SCRIPT easily access the loop variable.
The SCRIPT can also update the loop variable by replacing the top
of the alt stack with a different item.
Then part 3 first pops the alt stack top (the loop variable),
concatenates it with an appropriate push, then performs the
hash-then-concatenate dance.
This results in a SCRIPT that is the same as the original SCRIPT,
but with the loop variable possibly changed.
The SCRIPT can use multiple loop variables; it is simply a question
of how hard it would be to access from the alt stack.
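To make the shape of this concrete, here is a small Haskell model of the
quining-plus-loop-variable idea; this is not real Script, `toyHash` merely
stands in for `OP_HASH160`, and `resynthesize`, `quineCheck`, and `step`
are made-up names for the roles played by parts 2 and 3 above:
```Haskell
-- A toy model of the quining construction: Strings stand in for script
-- bytes, `toyHash` stands in for OP_HASH160, and none of these names are
-- real opcodes.
import Data.Char (ord)

-- Stand-in for OP_HASH160 (any collision-resistant hash would do here).
toyHash :: String -> String
toyHash = show . foldl (\h c -> (h * 33 + ord c) `mod` 1000003) 5381

-- Part 3: rebuild the full SCRIPT from the loop variable and the
-- witness-supplied quining argument (the SCRIPT minus its hash).
resynthesize :: Int -> String -> String
resynthesize i scriptTail =
  "PUSH(" ++ toyHash scriptTail ++ ") PUSH(" ++ show i ++ ") " ++ scriptTail

-- Part 2: check that the quining argument hashes to the hardcoded hash.
quineCheck :: String -> String -> Bool
quineCheck hardcodedHash witnessTail = toyHash witnessTail == hardcodedHash

-- One covenant iteration: verify the quine, bump the loop variable, and
-- produce the SCRIPT that the next output is forced to carry.
step :: Int -> String -> String -> Maybe String
step i hardcodedHash witnessTail
  | quineCheck hardcodedHash witnessTail = Just (resynthesize (i + 1) witnessTail)
  | otherwise                            = Nothing

main :: IO ()
main = do
  let scriptTail = "QUINE-CHECK; <business logic>; TLUV"
  putStrLn (resynthesize 0 scriptTail)
  -- The hardcoded hash pins down the constant tail; only the loop
  -- variable changes from one iteration to the next.
  print (step 0 (toyHash scriptTail) scriptTail)
```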
Drivechains Over Recursive Covenants
====================================
Drivechains can be split into four parts:
1. A way to commit to the sidechain blocks.
2. A way to move funds from mainchain to sidechain.
3. A way to store sidechain funds.
4. A way to move funds from sidechain to mainchain.
The first three can be easily implemented by a recursive covenant
without a loop variable, together with an opcode to impose some
restriction on amounts, such as `OP_IN_OUT_AMOUNT`.
The technique we would use would be to put the entire sidechain
funds into a single UTXO, protected by a recursive covenant.
The recursive covenant ensures that it can store the sidechain
funds.
This covers part 3.
The recursive covenant could, with the help of `OP_CAT` and
`OP_CTV`, check that every transaction spending the UTXO has a
second output that is an `OP_RETURN` with a commitment to the
sidechain block.
We can ensure that only one such transaction exists in each
mainchain block by adding a `<1> OP_CSV`, which forces the
covenant UTXO to be at least one block old before it can be
spent again, so only one sidechain-commitment transaction
can occur on each mainchain block.
This covers part 1.
Mainchain-to-sidechain pegs require the cooperation of a
sidechain validator.
The sidechain validator creates a block that instantiates the
peg-in on the sidechain, then creates a transaction that commits
to that sidechain block including the peg-in, and spends the
current sidechain UTXO *and* the mainchain funds being transferred
in.
Then the entity requesting the peg-in checks the sidechain block
and the commitment on the transaction, then signs the transaction.
The value restriction on the recursive covenant should then be to
allow the output to be equal to, or larger than, the input.
This covers part 2.
The recursive sidechain covenant by itself has a constant SCRIPT,
and thus has a constant address.
The last part of Drivechains -- sidechain-to-mainchain peg ---
is significantly more interesting.
Digression: Hashes As Peano Naturals
------------------------------------
It is possible to represent natural numbers using the following
Haskell data type:
```Haskell
data Nat = Z
         | S Nat
-- Z :: Nat
-- S :: Nat -> Nat
```
We can represent naturals as:
* `0` == `Z`
* `1` == `S Z`
* `2` == `S (S Z)`
* `3` == `S (S (S Z))`
* etc.
How do we translate this into Bitcoin SCRIPT?
* `Z` == Any arbitrary 160-bit number.
* `S` == `OP_HASH160`.
Thus:
* `0` == `Z`
* `1` == `hash160(Z)`
* `2` == `hash160(hash160(Z))`
* `3` == `hash160(hash160(hash160(Z)))`
* etc.
In particular:
* We can increment a number by simply doing `OP_HASH160`.
* We can decrement a number by having the supposed
decrementation be supplied on the witness stack, then
validating that it is indeed the next lower number by
hashing the witness item and comparing it to the number
we have.
Note also that ***we do not need `OP_ADD` or `OP_SUB` for
this***, though that would actually make it simpler.
(But yeah, the whole point is that *BITCOIN IS A LOT MORE
POWERFUL THAN YOU EXPECT*.)
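As a sanity check, the same encoding can be sketched in a few lines of
Haskell; again `toyHash` is only a stand-in for `OP_HASH160`, and all
the names below are illustrative:
```Haskell
-- A tiny sketch of the hashed-Peano encoding, with a toy stand-in for
-- OP_HASH160; all names here are illustrative.
import Data.Char (ord)

toyHash :: String -> String
toyHash = show . foldl (\h c -> (h * 33 + ord c) `mod` 1000003) 5381

-- Z is an arbitrary value; S is one application of the hash.
z :: String
z = "some-arbitrary-160-bit-number"

encode :: Int -> String
encode n = iterate toyHash z !! n   -- n is the number of hash applications

increment :: String -> String
increment = toyHash                 -- what the SCRIPT does to add one

-- The spender supplies `candidate` as the claimed predecessor; the SCRIPT
-- validates it by hashing once and comparing.
checkDecrement :: String -> String -> Bool
checkDecrement current candidate = toyHash candidate == current

main :: IO ()
main = do
  print (increment (encode 2) == encode 3)        -- True
  print (checkDecrement (encode 3) (encode 2))    -- True: valid decrement
  print (checkDecrement (encode 3) (encode 1))    -- False: wrong witness
```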
This is relevant to us due to how sidechain-to-mainchain
pegs are implemented.
Drivechain Peg-Out
------------------
In Drivechains, first somebody proposes to withdraw some
amount of funds from the sidechain to a mainchain address.
Then mainchain miners enter a voting period, during
which they either agree to the withdrawal, or disagree.
We can use the above schema to keep track of a running
total number of votes.
We define some numbers:
* `Z` == `0`
* `P` == some maximum time period.
We then encode `Z`, `P / 2`, and `P` using the hashed-Peano
encoding in the previous subsection.
In order to allow withdrawals, we have an alternate branch,
such as a different Tapleaf, for a withdrawal SCRIPT.
This only requires that the first output has the same address
as itself (i.e. the sidechain covenant), and the second output
has a new recursive covenant, the peg-out covenant.
The peg-out covenant has three loop variables:
* `v`, initialized to `Z`.
* This is the "validity level" of the peg-out.
* Voters who want to vote "for validity" would *increment*
this count.
* Voters who want to vote "against validity" would
*do nothing*.
* `t`, initialized to `Z`.
* This is the voting time period.
* Each time the peg-out covenant is used, this loop
variable is incremented.
* Once it reaches `P`, voting ends and the voting
branches of the peg-out covenant are disabled.
* `a`, initialized to the peg-out address.
* This is not actually changed in the covenant, but
it is useful to keep it in the loop variable storage
area.
* With `OP_CTV` this can be an address that commits to
any number of promised outputs.
The peg-out covenant has these branches:
* If `v` equals `P / 2`, then the UTXO can be spent to the
address `a`.
This is synthesized with an `OP_CTV` and `OP_CAT`.
* If `t` equals `P`, then the UTXO can only be spent
by being pegged into the sidechain covenant.
If this branch is not entered, we increment `t`.
* This implies an inter-recursion between the sidechain
covenant and the peg-out covenant.
* Check if the witness stack top is true or not:
* If true, increment `v` and recurse ("vote-for" branch).
* Else just recurse ("vote-against" branch).
### Fixing Inter-recursion
We can observe that the inter-recursion between the sidechain
covenant and the peg-out covenant is problematic:
* `OP_CTV` requires that the hash of the output covenant is
known.
* `OP_TLUV` will only replace the same output index as the
input index it is on.
This prevents the inter-recursion between the sidechain
covenant and the peg-out covenant.
To fix this, we can observe that we can translate any set
of inter-recursive functions, such as this:
```Haskell
foo :: FooArg -> Result
foo fa = bar (fooCode fa)
bar :: BarArg -> Result
bar ba = foo (barCode ba)
```
...into a single self-recursive function:
```Haskell
fooBar :: Either FooArg BarArg -> Result
fooBar a = case a of
  Left fa  -> fooBar (Right (fooCode fa))
  Right ba -> fooBar (Left (barCode ba))
```
Similarly, we can instead convert the inter-recursive
sidechain and peg-out covenants into a single
self-recursive covenant.
This single covenant would have the same set of loop
variables `v`, `t`, and `a` as the peg-out covenant
described above.
This time, `a` is not an address, but an entire output
(i.e. `scriptPubKey` and `amount`).
By default, `v`, `t`, and `a` are all set to the number `0`.
If so, then there is no pending peg-out being voted on.
If there is no pending peg-out, then either we just
commit to a sidechain block, or we commit to a sidechain
block *and* start a new peg-out by filling in `a`, and
initializing `v` and `t` to `Z`.
If there is a pending peg-out, then either we just commit
to a sidechain block (and implicitly downvote the pending
peg-out) or commit to a sidechain block *and* indicate an
upvote of the pending peg-out.
If `v` has reached the limit, then we require, using
`OP_CTV`, that `a` appears on the second output, that
the same SCRIPT (with `v`, `t`, and `a` reset to `0`)
is on the first output (with no minimum value imposed
on that first output), that the sidechain commitment
is now an `OP_RETURN` on the third output, and that
there are no other outputs.
If `t` has reached the limit, then we require simply that
`v`, `t`, and `a` are reset to `0` and that the sidechain
commitment is still present.
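Abstracting away all of the hashing and `OP_TLUV`/`OP_CAT`/`OP_CTV`
plumbing, the loop-variable transitions of this merged covenant can be
sketched as a small Haskell state machine; the names, thresholds, and
exact branch conditions below are illustrative only:
```Haskell
-- A rough state machine for the merged covenant's loop variables
-- (v, t, a), abstracting away all hashing, OP_CTV and OP_TLUV plumbing.
-- Every name and threshold below is illustrative only.
type Address = String

data PegOut = NoPegOut
            | Pending Int Int Address   -- v (upvotes), t (elapsed), a (payout)
  deriving Show

data Step = CommitOnly            -- commit a sidechain block; implicit downvote
          | StartPegOut Address   -- commit and propose a new peg-out
          | Upvote                -- commit and upvote the pending peg-out
  deriving Show

pMax, votesNeeded :: Int
pMax        = 10              -- maximum voting period P (illustrative)
votesNeeded = pMax `div` 2    -- the P/2 threshold

-- Returns the next covenant state, plus the payout address if the vote passed.
step :: PegOut -> Step -> (PegOut, Maybe Address)
step NoPegOut (StartPegOut addr) = (Pending 0 0 addr, Nothing)
step NoPegOut _                  = (NoPegOut, Nothing)
step (Pending v t addr) s
  | v' >= votesNeeded = (NoPegOut, Just addr)   -- peg-out succeeds, reset
  | t' >= pMax        = (NoPegOut, Nothing)     -- voting period over, reset
  | otherwise         = (Pending v' t' addr, Nothing)
  where
    t' = t + 1
    v' = case s of Upvote -> v + 1; _ -> v

main :: IO ()
main = mapM_ print
     $ scanl (\(st, _) act -> step st act) (NoPegOut, Nothing)
             (StartPegOut "mainchain-addr" : replicate votesNeeded Upvote)
```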
With the above, all components of Drivechain are implementable
with:
* `OP_TLUV`
* `OP_CAT`
* `OP_CTV`
* `OP_IN_OUT_AMOUNT` of some kind, including the ability to
check that the output amount is equal to or larger than the input amount
(e.g. by `OP_EQUAL` or `OP_GREATER`).
* Existing Bitcoin SCRIPT (`OP_ADD` **not** needed!).
Conclusion
==========
PH34R THE RECURSIVE COVENANT!
PH34R!!!!!!!
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-23 11:28 ` ZmnSCPxj
@ 2022-02-23 18:14 ` Paul Sztorc
2022-02-24 2:20 ` ZmnSCPxj
2022-02-24 6:53 ` Anthony Towns
1 sibling, 1 reply; 34+ messages in thread
From: Paul Sztorc @ 2022-02-23 18:14 UTC (permalink / raw)
To: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 6321 bytes --]
On 2/23/2022 6:28 AM, ZmnSCPxj via bitcoin-dev wrote:
> ... Drivechains is implementable on a Turing-complete
> language.
> And we have already rejected Drivechains, for the following reason:
>
> 1. Sidechain validators and mainchain miners have a strong incentive to
> merge their businesses.
> 2. Mainchain miners end up validating and committing to sidechain blocks.
> 3. Ergo, sidechains on Drivechains become a block size increase.
Is this indeed the reason? Because it is not a good one.
First, (as always) we must ignore BIP 301*. (Since it was invented to cancel point 1. Which it does -- by giving an incentive for side-validators and main-miners to UN-merge their businesses.)
With that out of the way, let's swap "blocksize increase" for "mining via natural gas flaring" :
1. Oil drillers and mainchain miners have a strong incentive** to merge their businesses.
2. Mainchain miners end up drilling for oil.
3. Ergo, sidechains on Drivechains become a requirement that full nodes mine for oil.
The above logic is flawed, because full nodes can ignore the mining process. Nodes outrank miners.
Merged mining is, in principle, no different from any other source of mining profitability. I believe there is an irrational prejudice against merged mining, because MM takes the form of software. It would be like an NFL referee who refuses to allow their child to play an NFL videogame, on the grounds that the reffing in the game is different from how the parent would ref. But that makes no difference to anything. The only relevant issue is if the child has fun playing the videogame.
(And of course, merged mining long predates drivechain, and miners are MMing now, and have been for years. It was Satoshi who co-invented merged mining, so the modern prejudice against it is all the more mysterious.)
> Also:
>
> 1. The sidechain-to-mainchain peg degrades the security of sidechain
> users from consensus "everyone must agree to the rules" to democracy
> "if enough enfranchised voters say so, they can beat you up and steal
> your money".
>
> In this write-up, I will...
This is also a mischaracterization.
Drivechain will not work if 51% hashrate is attacking the network. But that is the case for everything, including the Lightning Network***.
So there is no sense in which the security is "degraded". To establish that, one would need arguments about what will probably happen and why. Which is exactly what my original Nov 2015 article contains: truthcoin.info/blog/drivechain/#drivechains-security, as does my Peer Review section: https://www.drivechain.info/peer-review/peer-review-new/
(And, today Largeblocker-types do not have any "everyone must agree to the rules" consensus, at all. Anyone who wants to use a sidechain-feature today, must obtain it via Altcoin or via real-world trust. So the current security is "nothing" and so it is hard to see how that could be "degraded".)
--
I am not sure it is a good use of my time to talk to this list about Drivechain. My Nov 2015 article anticipated all of the relevant misunderstandings. Almost nothing has changed since then.
As far as I am concerned, Drivechain was simply ahead of its time. Eventually, one or more of the following --the problem of Altcoins, the human desire for freedom and creativity, the meta-consensus/upgrade/ossification problem, the problem of persistently low security budget, and/or the expressiveness of Bitcoin smart contracts-- will force Bitcoiners to relearn drivechain-lore and eventually adopt something drivechain-like. At which point I will write to historians to demand credit. That is my plan so far, at least.
--
As to the actual content of your post, it seems pro-Drivechain.
After all, you are saying that Recursive Covenants --> Turing Completeness --> Drivechain. So, which would you rather have? The hacky, bizarro, covenant-Drivechain, or my pure optimized transparent Bip300-Drivechain? Seems that this is exactly what I predicted: people eventually reinventing Drivechain.
On this topic, in 2015-2016 I wrote a few papers and gave a few recorded talks****, in which I compared the uncontrollable destructive chaos of Turing Completeness, to a "categorical" Turing Completeness where contracts are sorted by category (ie, all of the BitName contracts in the Namecoin-sidechain, all of the oracle contracts in the oracle sidechain, etc). The categorical strategy allows, paradoxically (and perhaps counterintuitively), for more expressive contracts, since you can prevent smart contracts from attacking each other. (They must have a category, so if they aren't Name-contracts they cannot live in the Namecoin-sidechain -- they ultimately must live in an "Evil Sidechain", which the miners have motive and opportunity to simply disable.) If people are now talking about how Turing Completeness can lead to smart contracts attacking each other, then I suppose I was years ahead-of-my-time with that, as well. Incidentally, my conclusion was that this problem is BEST solved by allowing miners to censor contract-categories (aka censor sidechain-categories, aka 'beat people up' as you put it), which is how I invented drivechain in the first place.
*Shrug*,
Paul
*A small table which explains how this works: https://github.com/bitcoin/bips/blob/master/bip-0301.mediawiki#notation-and-example
**Doubtless many of you have heard of this new trend: oil drillers encounter unwanted natural gas, in areas where there are no natural gas customers. Instead of wasting this gas, they have begun selling it to miners. https://economictimes.indiatimes.com/news/international/business/oil-drillers-and-bitcoin-miners-bond-over-natural-gas/articleshow/82828878.cms
***As is well known, it is easy for 51% hashrate to double-spend in the LN, by censoring 'justice transactions'. Moreover, miners seem likely to evade retribution if they do this, as they can restrain the scale, timing, victims, circumstances etc of the attack.
****https://www.youtube.com/watch?v=xGu0o8HH10U&list=PLw8-6ARlyVciMH79ZyLOpImsMug3LgNc4&index=1
https://www.truthcoin.info/blog/contracts-oracles-sidechains/
https://www.truthcoin.info/blog/drivechain-op-code/
https://www.truthcoin.info/blog/wise-contracts/
[-- Attachment #2: Type: text/html, Size: 7776 bytes --]
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-23 18:14 ` Paul Sztorc
@ 2022-02-24 2:20 ` ZmnSCPxj
0 siblings, 0 replies; 34+ messages in thread
From: ZmnSCPxj @ 2022-02-24 2:20 UTC (permalink / raw)
To: Paul Sztorc, Bitcoin Protocol Discussion
Good morning Paul, welcome back, and the list,
For the most part I am reluctant to add Turing-completeness due to the Principle of Least Power.
We saw this play out on the web browser technology.
A full Turing-complete language was included fairly early in a popular HTML implementation, which everyone else then copied.
In the beginning, it had very loose boundaries, and protections against things like cross-site scripting did not exist.
Eventually, W3C cracked down and modern JavaScript is now a lot more sandboxed than at the beginning --- restricting its power.
In addition, things like "change the color of this bit when the mouse hovers it", which used to be implemented in JavaScript, were moved to CSS, a non-Turing-complete language.
The Principle of Least Power is that we should strive to use the language with *only what we need*, and naught else.
So I think for the most part that Turing-completeness is dangerous.
There may be things, other than Drivechain, that you might object to enabling in Bitcoin, and if those things can be implemented in a Turing-complete language, then they are likely implementable in recursive covenants.
That the web *started* with a powerful language that was later restricted is fine for the web.
After all, the main use of the web is showing videos of attractive female humans, and cute cats.
(WARNING: WHEN I TAKE OVER THE WORLD, I WILL TILE IT WITH CUTE CAT PICTURES.)
(Note: I am not an AI that seeks to take over the world.)
But Bitcoin protects money, which I think is more important, as it can be traded not only for videos of attractive female humans, and cute cats, but other, lesser things as well.
So I believe some reticence towards recursive covenants, and other things they may enable, is warranted.
The Principle of Least Power exists, though admittedly, this principle was developed for the web.
The web is a server-client protocol, but Bitcoin is peer-to-peer, so it seems certainly possible that Principle of Least Power does not apply to Bitcoin.
As I understand it, however, the Principle of Least Power exists *precisely* because increased power often lets third parties do more than what was expected, including things that might damage the interests of the people who allowed the increased power to exist, or things that might damage the interests of *everyone*.
One can point out as well, that despite the problems that JavaScript introduced, it also introduced GMail and the now-rich Web ecosystem.
Perhaps one might liken recursive covenants to the box that was opened by Pandora.
Once opened, what is released cannot be put back.
Yet perhaps at the bottom of this box, is Hope?
Also: Go not to the elves for counsel, for they will say both no and yes.
Regards,
ZmnSCPxj
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-23 11:28 ` ZmnSCPxj
2022-02-23 18:14 ` Paul Sztorc
@ 2022-02-24 6:53 ` Anthony Towns
2022-02-24 12:03 ` ZmnSCPxj
1 sibling, 1 reply; 34+ messages in thread
From: Anthony Towns @ 2022-02-24 6:53 UTC (permalink / raw)
To: ZmnSCPxj, Bitcoin Protocol Discussion
On Wed, Feb 23, 2022 at 11:28:36AM +0000, ZmnSCPxj via bitcoin-dev wrote:
> Subject: Turing-Completeness, And Its Enablement Of Drivechains
> And we have already rejected Drivechains,
That seems overly strong to me.
> for the following reason:
> 1. Sidechain validators and mainchain miners have a strong incentive to
> merge their businesses.
> 2. Mainchain miners end up validating and committing to sidechain blocks.
> 3. Ergo, sidechains on Drivechains become a block size increase.
I think there are two possible claims about drivechains that would make
them unattractive, if true:
1) that adding a drivechain is a "block size increase" in the sense
that every full node and every miner need to do more work when
validating a block, in order to be sure whether the majority of hash
rate will consider it valid, or will reject it and refuse to build
on it because it's invalid because of some external drivechain rule
2) that funds deposited in drivechains will be stolen because
the majority of hashrate is not enforcing drivechain rules (or that
deposited funds cannot be withdrawn, but will instead be stuck in
the drivechain, rather than having a legitimate two-way peg)
And you could combine those claims, saying that one or the other will
happen (depending on whether more or less than 50% of hashpower is
enforcing drivechain rules), and either is bad, even though you don't
know which will happen.
I believe drivechain advocates argue a third outcome is possible where
neither of those claims holds true, where only a minority of hashrate
needs to validate the drivechain rules, but that is still sufficient
to prevent drivechain funds from being stolen.
One way to "reject" drivechains is simply to embrace the second claim --
that putting money into drivechains isn't safe, and that miners *should*
claim coins that have been drivechain encumbered (or that miners
should not assist with withdrawing funds, leaving them trapped in the
drivechain). In some sense this is already the case: bip300 rules aren't
enforced, so funds committed today via bip300 can likely expect to be
stolen, and likely won't receive the correct acks, so won't progress
even if they aren't stolen.
I think a key difference between tx-covenant based drivechains and bip300
drivechains is hashpower endorsement: if 50% of hashpower acks enforcement
of a new drivechain (as required in bip300 for a new drivechain to exist
at all), there's an implicit threat that any block proposing an incorrect
withdrawal from that blockchain will have their block considered invalid
and get reorged out -- either directly by that hashpower majority, or
indirectly by users conducting a UASF forcing the hashpower majority to
reject those blocks.
I think removing that implicit threat changes the game theory
substantially: rather than deposited funds being withdrawn due to the
drivechain rules, you'd instead expect them to be withdrawn according to
whoever's willing to offer the miners the most upfront fees to withdraw
the funds.
That seems to me to mean you'd frequently expect to end up in a scorched
earth scenario, where someone attempts to steal, then they and the
legitimate owner gets into a bidding war, with the result that most
of the funds end up going to miners in fees. Because of the upfront
payment vs delayed collection of withdrawn funds, maybe it could end up
as a dollar auction, with the two parties competing to lose the least,
but still both losing substantial amounts?
So I think covenant-based drivechains would be roughly the same as bip300
drivechains, where a majority of hashpower used software implementing
the following rules:
- always endorse any proposed drivechain
- always accept any payment into a drivechain
- accept bids to ack/nack withdrawals, then ack/nack depending on
whoever pays the most
You could probably make covenant-based drivechains a closer match to
bip300 drivechains if a script could determine if an input was from a
(100-block prior) coinbase or not.
> Logically, if the construct is general enough to form Drivechains, and
> we rejected Drivechains, we should also reject the general construct.
Not providing X because it can only be used for E, may generalise to not
providing Y which can also only be used for E, but it doesn't necessarily
generalise to not providing Z which can be used for both G and E.
I think it's pretty reasonable to say:
a) adding dedicated consensus features for drivechains is a bad idea
in the absence of widespread consensus that drivechains are likely
to work as designed and be a benefit to bitcoin overall
b) if you want to risk your own funds by leaving your coins on an
exchange or using lightning or eltoo or tumbling/coinjoin or payment
pools or drivechains or being #reckless in some other way, and aren't
asking for consensus changes, that's your business
Cheers,
aj
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-24 6:53 ` Anthony Towns
@ 2022-02-24 12:03 ` ZmnSCPxj
2022-02-26 5:38 ` Billy Tetrud
2022-02-26 6:00 ` Anthony Towns
0 siblings, 2 replies; 34+ messages in thread
From: ZmnSCPxj @ 2022-02-24 12:03 UTC (permalink / raw)
To: Anthony Towns; +Cc: Bitcoin Protocol Discussion
Good morning aj,
> > Logically, if the construct is general enough to form Drivechains, and
> > we rejected Drivechains, we should also reject the general construct.
>
> Not providing X because it can only be used for E, may generalise to not
> providing Y which can also only be used for E, but it doesn't necessarily
> generalise to not providing Z which can be used for both G and E.
Does this not work only if the original objection to merging in BIP-300 was of the form:
* X implements E.
* Z implements G and E.
* Therefore, we should not merge in X and instead should merge in the more general construct Z.
?
Where:
* E = Drivechains
* X = BIP-300
* Z = some general computation facility
* G = some feature.
But my understanding is that most of the NACKs on the BIP-300 were of the form:
* X implements E.
* E is bad.
* Therefore, we should not merge in X.
If the above statement "E is bad" holds, then:
* Z implements G and E.
* Therefore, we should not merge in Z.
Where Z = something that implements recursive covenants.
I think we really need someone who NACKed BIP-300 to speak up.
If my understanding is correct and the original objection was
* You can have either of these two positions:
* R[0], R[1] ... are specious arguments and Drivechains are not bad, therefore we can merge in a feature that enables Recursive Covenants -> Turing-Completeness -> Drivechains.
* Even if you NACKed before, you *are* allowed to change your mind and move to this position.
* R[0], R[1] ... are valid arguments and Drivechains are bad, therefore we should **NOT** merge in a feature that implements Recursive Covenants -> Turing-Completeness -> Drivechains.
You cannot have it both ways.
Admittedly, there may be some set of restrictions that prevent Turing-Completeness from implementing Drivechains, but you have to demonstrate a proof of that set of restrictions existing.
> I think it's pretty reasonable to say:
>
> a) adding dedicated consensus features for drivechains is a bad idea
> in the absence of widespread consensus that drivechains are likely
> to work as designed and be a benefit to bitcoin overall
>
> b) if you want to risk your own funds by leaving your coins on an
> exchange or using lightning or eltoo or tumbling/coinjoin or payment
> pools or drivechains or being #reckless in some other way, and aren't
> asking for consensus changes, that's your business
*Shrug* I do not really see the distinction here --- in a world with Drivechains, you are free to not put your coins in a Drivechain-backed sidechain, too.
(Admittedly, Drivechains does get into a Mutually Assured Destruction argument, so that may not hold.
But if Drivechains going into a MAD argument is an objection, then I do not see why covenant-based Drivechains would also not get into the same MAD argument --- and if you want to avoid the MADness, you cannot support recursive covenants, either.
Remember, 51% attackers can always censor the blockchain, regardless of whether you put the Drivechain commitments into the coinbase, or in an ostensibly-paid-by-somebody-else transaction.)
Regards,
ZmnSCPxj
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-24 12:03 ` ZmnSCPxj
@ 2022-02-26 5:38 ` Billy Tetrud
2022-02-26 6:43 ` ZmnSCPxj
2022-02-26 6:00 ` Anthony Towns
1 sibling, 1 reply; 34+ messages in thread
From: Billy Tetrud @ 2022-02-26 5:38 UTC (permalink / raw)
To: ZmnSCPxj, Bitcoin Protocol Discussion; +Cc: Anthony Towns
[-- Attachment #1: Type: text/plain, Size: 6756 bytes --]
@ZmnSCPxj
> we have already rejected Drivechains
I also think this is kind of dubious. I don't remember consensus being to
"reject" drivechains, as much as consensus was that it wasn't a priority
and there wasn't a lot of interest in working on it from many people (I'm
sure Paul could comment further on that).
> sidechains on Drivechains become a block size increase.
While this would be true for those who opt into a particular drivechain, I
think its important to note that it would *not* be identical to a
main-chain block size increase in a very important way: normal bitcoin
miners and nodes that don't care about drivechains would not see a
blocksize increase.
But even in the hypothetical scenario where *all* mainchain miners expand
their business to sidechains, it still does not negatively affect normal
bitcoin nodes that don't care about drivechains. The important things
<https://github.com/fresheneesz/bitcoinThroughputAnalysis> about a "normal"
blocksize increase are:
A. It increases the machine resources necessary for IBD, transaction relay,
and validation
B. It probably increases the growth rate of the UTXO set, increasing memory
necessary to store that.
C. It increases the cost of storing the blockchain on non-pruned nodes
D. It increases the average propagation time of new blocks, which increases
miner centralization pressure.
The existence of drivechains with every miner opted into (some of) them
would only negatively impact item D. Normal bitcoin nodes wouldn't need to
use any extra resources if they don't care about drivechains. And miners
would only have additional centralization pressure proportional to what
drivechains they're opted into. The reason for that is that if a miner is
opted into drivechain X, and propagation of transaction data for
drivechain X is significantly slower than the normal bitcoin network, a
miner may not have the latest drivechain X block to merge mine on top of.
However that miner can still mine bitcoin with no additional latency, and
so that centralization pressure is minimal unless a significant fraction of
the miner's revenue comes from drivechains with slow data propagation.
Beyond that, by my calculations, miner centralization is quite far from
being significantly affected by blocksize increases. So unless drivechains
become the dominant use case of the bitcoin blockchain, this really isn't
something that I expect to cause any substantial miner centralization or
other blocksize related problems.
ZmnSCPxj, are you arguing that drivechains are bad for bitcoin or are you
arguing that it would be unwise to opt into a drivechain? Those are very
different arguments. If drivechains compromised things for normal bitcoin
nodes that ignore drivechains, then I agree that would be a serious reason to
reject drivechains outright and reject things that allow it to happen.
However, if all you're saying is that people can shoot themselves in the
foot with drivechains, then avoiding drivechains should not be a
significant design consideration for bitcoin but rather for those who might
consider spending their time working on drivechains.
On Thu, Feb 24, 2022 at 6:03 AM ZmnSCPxj via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> Good morning aj,
>
> > > Logically, if the construct is general enough to form Drivechains, and
> > > we rejected Drivechains, we should also reject the general construct.
> >
> > Not providing X because it can only be used for E, may generalise to not
> > providing Y which can also only be used for E, but it doesn't necessarily
> > generalise to not providing Z which can be used for both G and E.
>
> Does this not work only if the original objection to merging in BIP-300
> was of the form:
>
> * X implements E.
> * Z implements G and E.
> * Therefore, we should not merge in X and instead should merge in the more
> general construct Z.
>
> ?
>
> Where:
>
> * E = Drivechains
> * X = BIP-300
> * Z = some general computation facility
> * G = some feature.
>
> But my understanding is that most of the NACKs on the BIP-300 were of the
> form:
>
> * X implements E.
> * E is bad.
> * Therefore, we should not merge in X.
>
> If the above statement "E is bad" holds, then:
>
> * Z implements G and E.
> * Therefore, we should not merge in Z.
>
> Where Z = something that implements recursive covenants.
>
> I think we really need someone who NACKed BIP-300 to speak up.
> If my understanding is correct and the original objection was
> "Drivechains are bad for reasons R[0], R[1]...", then:
>
> * You can have either of these two positions:
> * R[0], R[1] ... are specious arguments and Drivechains are not bad,
> therefore we can merge in a feature that enables Recursive Covenants ->
> Turing-Completeness -> Drivechains.
> * Even if you NACKed before, you *are* allowed to change your mind and
> move to this position.
> * R[0], R[1] ... are valid arguments and Drivechains are bad, therefore
> we should **NOT** merge in a feature that implements Recursive Covenants ->
> Turing-Completeness -> Drivechains.
>
> You cannot have it both ways.
> Admittedly, there may be some set of restrictions that prevent
> Turing-Completeness from implementing Drivechains, but you have to
> demonstrate a proof of that set of restrictions existing.
>
> > I think it's pretty reasonable to say:
> >
> > a) adding dedicated consensus features for drivechains is a bad idea
> > in the absence of widespread consensus that drivechains are likely
> > to work as designed and be a benefit to bitcoin overall
> >
> > b) if you want to risk your own funds by leaving your coins on an
> > exchange or using lightning or eltoo or tumbling/coinjoin or payment
> > pools or drivechains or being #reckless in some other way, and aren't
> > asking for consensus changes, that's your business
>
> *Shrug* I do not really see the distinction here --- in a world with
> Drivechains, you are free to not put your coins in a Drivechain-backed
> sidechain, too.
>
> (Admittedly, Drivechains does get into a Mutually Assured Destruction
> argument, so that may not hold.
> But if Drivechains going into a MAD argument is an objection, then I do
> not see why covenant-based Drivechains would also not get into the same MAD
> argument --- and if you want to avoid the MADness, you cannot support
> recursive covenants, either.
> Remember, 51% attackers can always censor the blockchain, regardless of
> whether you put the Drivechain commitments into the coinbase, or in an
> ostensibly-paid-by-somebody-else transaction.)
>
>
> Regards,
> ZmnSCPxj
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 7821 bytes --]
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-26 5:38 ` Billy Tetrud
@ 2022-02-26 6:43 ` ZmnSCPxj
2022-02-27 0:58 ` Paul Sztorc
0 siblings, 1 reply; 34+ messages in thread
From: ZmnSCPxj @ 2022-02-26 6:43 UTC (permalink / raw)
To: Billy Tetrud; +Cc: Bitcoin Protocol Discussion, Anthony Towns
Good morning Billy,
> ZmnSCPxj, are you arguing that drivechains are bad for bitcoin or are you arguing that it would be unwise to opt into a drivechain? Those are very different arguments. If drivechains compromised things for normal bitcoin nodes that ignore drivechains, then I agree that would be a serious reason to reject drivechains outright and reject things that allow it to happen. However, if all you're saying is that people can shoot themselves in the foot with drivechains, then avoiding drivechains should not be a significant design consideration for bitcoin but rather for those who might consider spending their time working on drivechains.
Neither.
My argument is simply:
* If Drivechains are bad for whatever reason, we should not add recursive covenants.
* Otherwise, go ahead and add recursive covenants.
Drivechains are not a scaling solution [FOOTNOTE 1] and I personally am interested only in scaling solutions; adding more non-scaling-useable functionality is not of interest to me and I do not really care (but I would *prefer* that people focus on scaling-useable functionality, like `SIGHASH_NOINPUT`, `OP_EVICT`, `OP_CTV`, `OP_TLUV` probably without the self-replace capability).
I bring this up simply because I remembered those arguments against Drivechains, and as far as I could remember, those were the reasons for not adding Drivechains.
But if there is consensus that those arguments are bogus, then go ahead --- add Drivechains and/or recursive covenants.
I do not intend to utilize them any time soon anyway.
My second position is that in general I am wary of adding Turing-completeness, due precisely to Principle of Least Power.
A concern is that, since it turns out recursive covenants are sufficient to implement Drivechains, recursive covenants may also enable *other* techniques, currently unknown, which may have negative effects on Bitcoin, or which would be considered undesirable by a significant section of the userbase.
Of course, I know of no such technique, but given that a technique (Drivechains) which before would have required its own consensus change, turns out to be implementable inside recursive covenants, then I wonder if there are other things that would have required their own consensus change that are now *also* implementable purely in recursive covenants.
Of course, that is largely just stop energy, so if there is *now* consensus that Drivechains are not bad, go ahead, add recursive covenants (but please can we add `SIGHASH_NOINPUT` and `OP_CTV` first?).
Regards,
ZmnSCPxj
[FOOTNOTE 1] Sidechains are not a scaling solution, or at least, are beaten in raw scaling by Lightning. Blockchains are inefficient (THAT IS PRECISELY THE PROBLEM WHY YOU NEED A SCALING SOLUTION FOR BITCOIN THAT WAS LIKE THE FIRST RESPONSE TO SATOSHI ON THE CYPHERPUNK MAILING LIST) and you have to show your transaction to everyone. While sidechains imply that particular subsets are the only ones interested in particular transactions, compare how large a sidechain-participant-set would be expected to be, to how many people learn of a payment over the Lightning Network. If you want a sidechain to be as popular as LN, then you expect its participant set to be about as large as LN as well, and on a sidechain, a transaction is published to all sidechain participants, but on the LN, only a tiny tiny tiny fraction of the network is involved in any payment. Thus LN is a superior scaling solution. Now you might counter-argue that you can have multiple smaller sidechains and just use HTLCs to trade across them (i.e. microchains). I would then counter-counter-argue that bringing this to the most extreme conclusion, you would have tons of sidechains with only 2 participants each, and then you would pay by transferring across multiple participants in a chain of HTLCs and look, oh wow, surprise surprise, you just got the Lightning Network. LN wins.
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-26 6:43 ` ZmnSCPxj
@ 2022-02-27 0:58 ` Paul Sztorc
2022-02-27 2:00 ` ZmnSCPxj
0 siblings, 1 reply; 34+ messages in thread
From: Paul Sztorc @ 2022-02-27 0:58 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 5511 bytes --]
On 2/26/2022 1:43 AM, ZmnSCPxj via bitcoin-dev wrote:
> ...
> Drivechains are not a scaling solution [FOOTNOTE 1] ...
> I personally am interested only in scaling solutions, adding more non-scaling-useable functionality is not of interest to me and I do not really care
> ...
> But if there is consensus that those arguments are bogus, then go ahead --- add Drivechains and/or recursive covenants.
> ...
>
> [FOOTNOTE 1] Sidechains are not a scaling solution ... Blockchains are inefficient ... and you have to show your transaction to everyone.
> ...
> Now you might conter-argue that you can have multiple smaller sidechains and just use HTLCs to trade across them ... I would then counter-counter-argue that bringing this to the most extreme conclusion, you would have tons of sidechains with only 2 participants each ...
Do you really hang your entire --"sidechains are not a scaling solution"-- argument on this frail logic?
The scaling strategy (in LN and DC) is the same: try NOT to "show your transaction to everyone". The details are of course different.
I think largeblock sidechains should be reconsidered:
* They are not a blocksize increase.
* They endorse the principle of scaling in layers.
* They allow users to be different. Some can pay more (for more decentralization), some less (for less decentralization).
(We are currently gambling the entire future of BTC, on the premise that strong decentralization will always be needed at all points in time.)
(This leaves us vulnerable to a strategy where our adversaries temporarily favor/promote centralized chains, so as to "domesticate" / control these in the future.)
* We can learn from past mistakes -- when a new largeblock sidechain is needed, we can make a new one from scratch, using everything we know.
* Different teams can compete, releasing different chains independently; thus curtailing "toxicity".
* All of the fees, paid on all blockchains, arrive as revenue to the same group of miners, thus improving total hashrate/difficulty.
* Sidechains will organize geographically, which may help security (ie, USA could spitefully run full nodes of the "China" largeblock sidechain).
* Relative to LN, users enjoy: unlimited "inbound liquidity", can receive money while offline, no risk that the channel will close, etc.
Certainly, sidechains are NOT for everyone. (Just as [I imagine] the LN is not for everyone.)
However, in 2015, many hardfork-largeblockers said: "we do not run a full node, full nodes are not important; we use SPV; read the whitepaper" etc.
They used SPV completely; and wanted large blocks. Presumably they would be happy users of a largeblock sidechain. So it would be >0 users.
Sadly, this idea is neglected (I think) because of its unfortunate resemblance to naive-largeblock-ism. This is irrational.
***
You have emphasized the following relation: "you have to show your transaction to everyone" = "thing doesn't scale".
However, in LN, there is one transaction which you must, in fact, "show to everyone": your channel-opener.
Amusingly, in the largeblock sidechain, there is not. You can onboard using only the blockspace of the SC.
(One "rich guy" can first shift 100k coins Main-to-Side, and he can henceforth onboard many users over there. Those users can then onboard new users, forever.)
So it would seem to me that you are on the ropes, even by your own criterion. [Footnote 1]
***
Perhaps someone will invent a way to LN-onboard WITHOUT needing new layer1 bytes.
If so, a "rich man" could open a LN channel, and gradually transfer it to new people.
Such a technique would need to meet two requirements (or, so it seems to me):
#1: The layer1 UTXO (that defines the channel) can never change (ie, the 32-bytes which define the p2sh/tapscript/covenant/whatever, must stay what-they-were when the channel was opened).
#2: The new part-owners (who are getting coins from the rich man), will have new pubkeys which are NOT known, until AFTER the channel is opened and confirmed on the blockchain.
Not sure how you would get both #1 and #2 at the same time. But I am not up to date on the latest LN research.
Paul
[Footnote 1]
I am certainly not a LN expert, so perhaps this analysis is misconceived. But consider these "best case scenario" assumptions for LN:
* Each new channel-open consumes just 32 vbytes (since they are all done via one or more "rich men" who batches all these into one block, 24/7/365)
* Each new channel-open onboards 5 users at once who are a permanent trust group / channel factory / what-have-you
(these five newcomers must coordinate with each other and the "rich man", presumably via calendly link or whatever, for their one shot at getting on the blockchain).
* That one single channel is able to meet 100% of the user's payment needs
(it never has any problems, with liquidity /balancing /routing /uptime /hotwallet-crashing /counterparty-fees /etc)
(and also, people do NOT desire >1 channel for other reasons: their alt nyms, small business, church, etc)
* 99.9% of the 1MB (vB) blocksize is used for channel-opens (the spare 1000 vb = the coinbase + the single "rich man"-input)
* World population becomes a fixed 8.2 billion (and henceforth stops growing)
By simple envelope math, 6*24*365*(((1000000*.999)/32)*5) / 8.2 billion = ~exactly one year to onboard everyone.
But if the above assumptions contain, say, two orders of magnitude of "optimism", then it would instead take 100 years.
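(The same envelope math, as a trivial Haskell calculation, for anyone who wants to rerun it with different assumptions; every figure below is one of the optimistic assumptions above, not a measurement:)
```Haskell
-- Re-running the envelope math above; every figure is one of the
-- optimistic assumptions in this footnote, not a measurement.
main :: IO ()
main = do
  let blocksPerYear  = 6 * 24 * 365 :: Double
      usableVBytes   = 1000000 * 0.999     -- vbytes per block spent on opens
      opensPerBlock  = usableVBytes / 32   -- 32 vbytes per channel-open
      usersPerBlock  = opensPerBlock * 5   -- 5 users onboarded per open
      population     = 8.2e9
      yearsToOnboard = population / (usersPerBlock * blocksPerYear)
  print yearsToOnboard                     -- ~1.0 under these assumptions
```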
[-- Attachment #2: Type: text/html, Size: 5961 bytes --]
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-27 0:58 ` Paul Sztorc
@ 2022-02-27 2:00 ` ZmnSCPxj
2022-02-27 7:25 ` ZmnSCPxj
2022-02-28 0:20 ` Paul Sztorc
0 siblings, 2 replies; 34+ messages in thread
From: ZmnSCPxj @ 2022-02-27 2:00 UTC (permalink / raw)
To: Paul Sztorc, Bitcoin Protocol Discussion
Good morning Paul,
> ***
>
> You have emphasized the following relation: "you have to show your transaction to everyone" = "thing doesn't scale".
>
> However, in LN, there is one transaction which you must, in fact, "show to everyone": your channel-opener.
>
> Amusingly, in the largeblock sidechain, there is not. You can onboard using only the blockspace of the SC.
> (One "rich guy" can first shift 100k coins Main-to-Side, and he can henceforth onboard many users over there. Those users can then onboard new users, forever.)
>
> So it would seem to me, that you are on the ropes, even by your own criterion. [Footnote 1]
>
> ***
>
> Perhaps, someone will invent a way, to LN-onboard WITHOUT needing new layer1 bytes.
>
> If so, a "rich man" could open a LN channel, and gradually transfer it to new people.
>
> Such a technique would need to meet two requirements (or, so it seems to me):
> #1: The layer1 UTXO (that defines the channel) can never change (ie, the 32-bytes which define the p2sh/tapscript/covenant/whatever, must stay what-they-were when the channel was opened).
> #2: The new part-owners (who are getting coins from the rich man), will have new pubkeys which are NOT known, until AFTER the channel is opened and confirmed on the blockchain.
>
> Not sure how you would get both #1 and #2 at the same time. But I am not up to date on the latest LN research.
Yes, using channel factories.
A channel factory is a N-of-N where N >= 3, and which uses the same offchain technology to host multiple 2-of-2 channels.
We observe that, just as an offchain structure like a payment channel can host HTLCs, any offchain structure can host a lot of *other* contracts, because the offchain structure can always threaten to drop onchain to enforce any onchain-enforceable contract.
But an offchain structure is just another onchain contract!
Thus, an offchain structure can host many other offchain structures, and thus an N-of-N channel factory can host multiple 2-of-2 channel factories.
(I know we discussed sidechains-within-sidechains before, or at least I mentioned that to you in direct correspondence, this is basically that idea brought to its logical conclusion.)
Thus, while you still have to give *one* transaction to all Bitcoin users, that single transaction can back several channels, up to (N * (N - 1)) / 2.
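As a quick illustration of that (N * (N - 1)) / 2 figure, a minimal sketch:

    # Number of 2-of-2 channels an N-of-N factory can host among its members.
    def channels_in_factory(n: int) -> int:
        return n * (n - 1) // 2

    for n in (3, 5, 10, 50):
        print(n, "members ->", channels_in_factory(n), "possible channels")
    # 3 -> 3, 5 -> 10, 10 -> 45, 50 -> 1225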
It is not quite matching your description --- the pubkeys of the peer participants need to be fixed beforehand.
However, all it means is some additional pre-planning during setup with no scope for dynamic membership.
At least, you cannot dynamically change membership without onchain action.
You *can* change membership sets by publishing a one-input-one-output transaction onchain, but with Taproot, the new membership set is representable in a single 32-byte Taproot address onchain (admittedly, the transaction input is a txin and thus has overhead 32 bytes plus 1 byte for txout index, and you need 64 bytes signature for Taproot as well).
The advantage is that, in the meantime, if membership set is not changed, payments can occur *without* any data published on the blockchain (literally 0 data).
With sidechains, changing the ownership set requires that the sidechain produce a block.
That block requires a 32-byte commitment in the coinbase.
What is more, if *any* transfers occur on the sidechain, they cannot be real without a sidechain block, that has to be committed on the mainchain.
Thus, while changing the membership set of a channel factory is more expensive (it requires a pointer to the previous txout, a 64-byte Taproot signature, and a new Taproot address), continuous operation does not publish any data at all.
While in sidechains, continuous operation and ordinary payments ideally require one commitment of 32 bytes per mainchain block.
Continuous operation of the sidechain then implies a constant stream of 32-byte commitments, whereas continuous operation of a channel factory, in the absence of membership set changes, has 0 bytes per block being published.
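To put rough numbers on that comparison, a toy sketch (the byte figures are the ones quoted above; how often a factory actually changes membership is an assumption for illustration):

    # Onchain bytes per mainchain block: sidechain vs channel factory.
    SIDECHAIN_COMMITMENT = 32            # one coinbase commitment every block
    FACTORY_CHANGE = 32 + 1 + 64 + 32    # txin ref + index + Taproot sig + new Taproot address

    print("sidechain:", SIDECHAIN_COMMITMENT, "bytes/block")
    for blocks_between_changes in (1, 144, 1008):   # assumed membership-change intervals
        print("factory (change every", blocks_between_changes, "blocks):",
              round(FACTORY_CHANGE / blocks_between_changes, 3), "bytes/block")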
We assume that onboarding new members is much rarer than existing members actually paying each other in an actual economy (after the first burst of onboarding, new members will only arise in proportion to the birth rate, but typical economic transactions occur much more often), so optimizing for the continuous operation seems a better tradeoff.
Channel factories have the nice properties:
* N-of-N means that nobody can steal from you.
* Even with a 51% miner, nobody can steal from you as long as none of the N participants is the 51% miner, see the other thread.
* Graceful degradation: even if 1 of the N is offline, payments are done over the hosted 2-of-2s, and the balance of probability is that most of the 2-of-2s have both participants online and payments can continue to occur.
--
The reason why channel factories do not exist *yet* is that the main offchain construction we have, Poon-Dryja, is 2-of-2.
We have Decker-Wattenhofer, which supports N >= 2, but it needs to publish a lot of onchain data in case of dispute, and has lousy UX due to how it uses delays (you can only be safely offline for some small number of blocks, but you have to wait out a large multiple of that parameter).
We also have the newer Decker-Russell-Osuntokun ("eltoo"), but that requires `SIGHASH_NOINPUT`, which is now today called `SIGHASH_ANYPREVOUT`.
`OP_CTV` also is useful for publishing commitments-to-promised-outputs without having to publish outputs right now.
This is why I want to focus on getting both on Bitcoin first, *before* any recursive-contract-enabling technologies.
Admittedly, the recursive-covenant-enabling constructs look like they enable functionality equivalent to `SIGHASH_NOINPUT` and `OP_CTV`, though as I understand them, they would require more bytes than `SIGHASH_NOINPUT` or `OP_CTV`.
And scaling is really improved by reducing the number of bytes published, so there is value in merging in `SIGHASH_ANYPREVOUT` and `OP_CTV` at some point, so why not now.
Regards,
ZmnSCPxj
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-27 2:00 ` ZmnSCPxj
@ 2022-02-27 7:25 ` ZmnSCPxj
2022-02-27 16:59 ` Billy Tetrud
2022-02-28 0:20 ` Paul Sztorc
1 sibling, 1 reply; 34+ messages in thread
From: ZmnSCPxj @ 2022-02-27 7:25 UTC (permalink / raw)
To: ZmnSCPxj, Bitcoin Protocol Discussion
Good morning again Paul,
> With sidechains, changing the ownership set requires that the sidechain produce a block.
> That block requires a 32-byte commitment in the coinbase.
> What is more, if any transfers occur on the sidechain, they cannot be real without a sidechain block, that has to be committed on the mainchain.
The above holds if the mainchain miners also act as sidechain validators.
If they are somehow separate (i.e. blind merge mining), then a separate `OP_BRIBE` transaction is needed as well.
Assuming the sidechain validator is using Taproot as well, it needs the 32+1 txin, a 64-byte signature, a 32-byte copy of the sidechain commitment that the miner is being bribed to put in the coinbase, and a txout for any change the sidechain validator has.
This is somewhat worse than the case for channel factories, even if you assume that every block, at least one channel factory has to do an onboarding event.
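For concreteness, a rough byte count of that blind-merge-mined `OP_BRIBE` transaction; the ~43-byte size of the Taproot change output is my assumption, the other figures are the ones quoted above:

    # Rough per-sidechain-block cost under blind merge mining.
    TXIN = 32 + 1               # previous txid + output index
    SIG = 64                    # Taproot (Schnorr) signature
    COMMITMENT = 32             # sidechain commitment the miner is bribed to include
    CHANGE_TXOUT = 8 + 1 + 34   # value + script length + P2TR scriptPubKey (assumed layout)

    print("OP_BRIBE tx, roughly:", TXIN + SIG + COMMITMENT + CHANGE_TXOUT, "bytes")   # ~172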
> Thus, while changing the membership set of a channel factory is more expensive (it requires a pointer to the previous txout, a 64-byte Taproot signature, and a new Taproot address), continuous operation does not publish any data at all.
> While in sidehchains, continuous operation and ordinary payments requires ideally one commitment of 32 bytes per mainchain block.
> Continuous operation of the sidechain then implies a constant stream of 32-byte commitments, whereas continuous operation of a channel factory, in the absence of membership set changes, has 0 bytes per block being published.
>
> We assume that onboarding new members is much rarer than existing members actually paying each other in an actual economy (after the first burst of onboarding, new members will only arise in proportion to the birth rate, but typical economic transactions occur much more often), so optimizing for the continuous operation seems a better tradeoff.
Perhaps more illustratively, with channel factories, different layers have different actions they can do, and the only one that needs to be broadcast widely are actions on the onchain layer:
* Onchain: onboarding / deboarding
* Channel Factory: channel topology change
* Channel: payments
This is in contrast with merge-mined Sidechains, where *all* activity requires a commitment on the mainchain:
* Onchain: onboarding / deboarding, payments
While it is true that all onboarding, deboarding, and payments are summarized in a single commitment, notice how in LN-with-channel-factories, all onboarding / deboarding is *also* summarized, but payments *have no onchain impact*, at all.
Without channel factories, LN is only:
* Onchain: onboarding / deboarding, channel topology change
* Channel: payments
So even without channel factories there is already a win, although again, due to the large numbers of channels we need, a channel factory in practice will be needed to get significantly better scaling.
Finally, in practice with Drivechains, starting a new sidechain requires implicit permission from the miners.
With LN, new channels and channel factories do not require any permission, as they are indistinguishable from ordinary transactions.
(the gossip system does leak that a particular UTXO is a particular published channel, but gossip triggers after deep confirmation, at which point it would be too late for miners to censor the channel opening.
The miners can censor channel closure for published channels, admittedly, but at least you can *start* a new channel without being censored, which you cannot do with Drivechain sidechains.)
Regards,
ZmnSCPxj
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-27 7:25 ` ZmnSCPxj
@ 2022-02-27 16:59 ` Billy Tetrud
2022-02-27 23:50 ` Paul Sztorc
0 siblings, 1 reply; 34+ messages in thread
From: Billy Tetrud @ 2022-02-27 16:59 UTC (permalink / raw)
To: ZmnSCPxj, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 5949 bytes --]
@Paul
> I think largeblock sidechains should be reconsidered:
> * They are not a blocksize increase.
This is short-sighted. They would absolutely be a blocksize increase for
those following a large block sidechain. While sure, it wouldn't affect
bitcoin users who don't follow that sidechain, it's misleading to call it
"not a blocksize increase" for everyone.
> * They allow users to be different. Some can pay more (for more decentralization), some less (for less decentralization).
> gambling the entire future of BTC, on the premise that strong decentralization will always be needed at all points in time.
Decentralization isn't just something where more is more valuable and
less is less valuable. Decentralization is either enough to stop a
class of attack or it's not. It's pretty binary. If the decentralization
is not enough, it would be a pretty huge catastrophe for those
involved. It's pretty clear that making the blocksize e.g. 10 times
larger is a poor design choice. So advocating for such a thing on a
sidechain is just as bad as advocating for it on an altcoin.
Even if people only put a couple satoshis in such a sidechain at a
time, and don't feel the loss very much, the *world* would feel the
loss. Eg if everyone had $1 in such a system, and someone stole it
all, it would be a theft of billions of dollars. The fact that no
individual would feel much pain would make it not much less harmful to
society.
> We can learn from past mistakes -- when a new largeblock sidechain is needed, we can make a new one from scratch, using everything we know.
If there's some design principles that *allow* for safely increasing the
blocksize substantially like that, then I'd advocate for it in bitcoin. But
the goal of sidechains should not be "shoot from the hip and after everyone
on that sidechain gets burned we'll have learned valuable lessons". That's
not how engineering works. That's akin to reckless human experimentation.
[-- Attachment #2: Type: text/html, Size: 7453 bytes --]
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-27 16:59 ` Billy Tetrud
@ 2022-02-27 23:50 ` Paul Sztorc
0 siblings, 0 replies; 34+ messages in thread
From: Paul Sztorc @ 2022-02-27 23:50 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 6828 bytes --]
On 2/27/2022 11:59 AM, Billy Tetrud via bitcoin-dev wrote:
> @Paul
> > I think largeblock sidechains should be reconsidered:
> > * They are not a blocksize increase.
> This is short sighted. They would absolutely be a blocksize increase
> for those following a large block sidechain. While sure, it wouldn't
> affect bitcoin users who don't follow that sidechain, its misleading
> to call it "not a blocksize increase" for everyone.
Your larger explanation is entirely correct.
Many of the important anti-largeblock arguments are not relevant to the largeblock sidechain strategy, but some of them still are.
My concern is that people will jump to conclusions, and use the old 2015 arguments against "a blocksize increase" against this idea.
Hence my small bullet point.
> > * They allow users to be different. Some can pay more (for more decentralization), some less (for less decentralization).
> > gambling the entire future of BTC, on the premise that strong decentralization will always be needed at all points in time.
> Decentralization isn't just something where more is more valuable and less is less valuable. Decentralization is either enough to stop a class of attack or its not. Its pretty binary. If the decentralization is not enough, it would be a pretty huge catastrophe for those involved. Its pretty clear that making the blocksize eg 10 times larger is a poor design choice. So advocating for such a thing on a sidechain is just as bad as advocating for it on an altcoin.
> Even if people only put a couple satoshis in such a sidechain at a time, and don't feel the loss very much, the *world* would feel the loss. Eg if everyone had $1 in such a system, and someone stole it all, it would be a theft of billions of dollars. The fact that no individual would feel much pain would make it not much less harmful to society.
I believe that you have missed my point. Let me try to explain it in more detail.
First, imagine a magic spell is cast, which 100% prevents the "class of attack" which you mention. In that situation, all of the work that BTC does to remain decentralized, is a pure burden with absolutely no benefit whatsoever. Rational users will then become indifferent to centralization. Since decentralization has tradeoffs, users will tend to be drawn towards 'crypto' projects that have very low decentralization.
Next, imagine that the spell is lifted, and the attacks start. Users will be, of course, drawn back towards BTC, and they will appreciate it for its decentralization.
So what's the problem? Well, I believe that money has very strong network effects. Therefore, I believe that "user inertia" will be much stronger than usual. At a certain critical mass it may be insurmountable. So, at certain points along the spectrum, users will "clump up" and get "stuck".
Thus, we may "clump" on a chain that is not the most decentralized one. And an adversary can use this to their advantage. They can "grow" the centralized chain at first, to help it, and help ensure that they do not have to deal with the most decentralized chain.
This entire issue is avoided completely, if all the chains --decentralized and centralized-- are in the same monetary unit. Then, the monetary network effects never interfere, and the decentralized chain is always guaranteed to exist.
As for the phrase " Its pretty clear that making the blocksize eg 10 times larger is a poor design choice" I think this entire way of reasoning about the blocksize, is one that only applies to a non-sidechain world.
In contrast, in a world where many chains can be created, it does not matter what Some Guy thinks is "pretty clear". The only thing that matters is that people can try things out, are rewarded for successes, and are punished for mistakes.
So: if someone makes a largeblock sidechain, and the design is bad, the chain fails, and their reputation suffers.
In my way-of-reasoning, someone is actually in the wrong, if they proactively censor an experiment of any type. If a creator is willing to stand behind something, then it should be tried.
In fact, it is often best for everyone (especially the end user), if a creator keeps their ideas secret (from the "peer review" community). That way they can at least get credit/glory. The soon-to-be-successful experiments of tomorrow, should be incomprehensible to the experts of today. That's what makes them innovations.
Finally, to me it makes no difference if users have their funds stolen from a centralized Solana contract (because there is only one full node which the operator resets), or from a bip300 centralized bit-Solana sidechain (for the same reason). I don't see why the tears shed would be any different.
> > We can learn from past mistakes -- when a new largeblock sidechain is needed, we can make a new one from scratch, using everything we know.
> If there's some design principles that *allow* for safely increasing the blocksize substantially like that, then I'd advocate for it in bitcoin. But the goal of sidechains should not be "shoot from the hip and after everyone on that sidechain gets burned we'll have learned valuable lessons". That's not how engineering works. That's akin to wreckless human experimentation.
Again, we perhaps have a fundamental disagreement on this point.
In 2008 a FED chairman might have said to Satoshi, "If there were design principles that *allowed* for private, digital, bearer-instrument, payments, then of course I'd advocate for it here at the FED. But the goal of bitcoin should not be 'shoot from the hip ...'. That's not how engineering works. That's akin to wreckless human experimentation."
I think that the most dangerous experiment of all, is to adopt the 'reckless' policy of suppressing creativity.
If instead you said something like, "If a 10x blocksize chain is ever demonstrated to have property XYZ, then I will repent my error by putting my own children to death", then the audience would at least have some idea of your confidence and sincerity. But, again, a FED chairman could say exactly that, about Bitcoin. And they would still have been wrong. And even if they were right (on a fluke) they would still have been wrong to prevent the idea from being tried.
Censorship (the suppression of ideas, merely because you disagree with them) is not only immoral, on this issue it is also largely pointless. Today, a Bitcoiner can sell their BTC for Solana, or BSV, and there is nothing anyone here can do about it. Altcoin Solana vs bip300 bit-Solana, would seem to be equivalently reckless to me. So, your implicit advice (of bureaucracy-based sidechain drop/add), seems to fail to meet your own criterion (of preventing human recklessness). And it certainly does other bad things for no reason (pumps an altcoin, decreases btc fee revenues /hashrate, etc).
Paul
[-- Attachment #2: Type: text/html, Size: 8249 bytes --]
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-27 2:00 ` ZmnSCPxj
2022-02-27 7:25 ` ZmnSCPxj
@ 2022-02-28 0:20 ` Paul Sztorc
2022-02-28 6:49 ` ZmnSCPxj
1 sibling, 1 reply; 34+ messages in thread
From: Paul Sztorc @ 2022-02-28 0:20 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 2959 bytes --]
On 2/26/2022 9:00 PM, ZmnSCPxj wrote:
> ...
>> Such a technique would need to meet two requirements (or, so it seems to me):
>> #1: The layer1 UTXO (that defines the channel) can never change (ie, the 32-bytes which define the p2sh/tapscript/covenant/whatever, must stay what-they-were when the channel was opened).
>> #2: The new part-owners (who are getting coins from the rich man), will have new pubkeys which are NOT known, until AFTER the channel is opened and confirmed on the blockchain.
>>
>> Not sure how you would get both #1 and #2 at the same time. But I am not up to date on the latest LN research.
> Yes, using channel factories.
I think you may be wrong about this.
Channel factories do not meet requirement #2, as they cannot grow to onboard new users (ie, new pubkeys).
The factory-open requires that people pay to (for example), a 5-of-5 multisig. So all 5 fixed pubkeys must be known, before the factory-open is confirmed, not after.
> We assume that onboarding new members is much rarer than existing members actually paying each other
Imagine that Bitcoin could only onboard 5 new users per millennium, but once onboarded they had payment nirvana (could transact hundreds of trillions of times per second, privately, smart contracts, whatever).
Sadly, the payment nirvana would not matter. The low onboarding rate would kill the project.
The difference between the two rates [onboarding and payment], is not relevant. EACH rate must meet the design goal.
It is akin to saying: " Our car shifts from park to drive in one-millionth of a second, but it can only shift into reverse once per year; but that is OK because 'we assume that going in reverse is much rarer than driving forward' ".
> Continuous operation of the sidechain then implies a constant stream of 32-byte commitments, whereas continuous operation of a channel factory, in the absence of membership set changes, has 0 bytes per block being published.
That's true, but I think you have neglected to actually take out your calculator and run the numbers.
Hypothetically, 10 largeblock-sidechains would be 320 bytes per block (0.032%, essentially nothing).
Those 10, could onboard 33% of the planet in a single month [footnote], even if each sc-onboard required an average of 800 sc-bytes.
Certainly not a perfect idea, as the SC onboarding rate is the same as the payment rate. But once they are onboarded, those users can immediately join the LN *from* their sidechain. (All of the SC LNs would be interoperable.)
Such a strategy would take enormous pressure *off* of layer1 (relative to the "LN only" strategy). The layer1 blocksize could even **shrink** from 4 MB (wu) to 400 kb, or lower. That would cancel out the 320 bytes of overhead, many hundreds of times over.
Paul
[footnote] Envelope math, 10 sidechains, each 50 MB forever-fixed blocksize (which is a mere 12.5x our current 4M wu limit): 10 * 6*24*30 * ((50*1000*1000)/800) / 8.2 billion = .32926
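A minimal sketch of the footnote's arithmetic, for anyone who wants to vary the assumptions:

    # 10 largeblock sidechains onboarding users over one month.
    SIDECHAINS = 10
    SC_BLOCK_BYTES = 50 * 1000 * 1000    # 50 MB fixed sidechain blocksize
    BYTES_PER_ONBOARD = 800              # assumed average sc-bytes per onboard
    BLOCKS_PER_MONTH = 6 * 24 * 30
    WORLD_POPULATION = 8_200_000_000

    onboarded = SIDECHAINS * BLOCKS_PER_MONTH * (SC_BLOCK_BYTES / BYTES_PER_ONBOARD)
    print(onboarded / WORLD_POPULATION)  # ~0.329, i.e. about a third of the planet
    # Mainchain overhead: SIDECHAINS * 32 = 320 bytes per mainchain block.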
[-- Attachment #2: Type: text/html, Size: 3654 bytes --]
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-28 0:20 ` Paul Sztorc
@ 2022-02-28 6:49 ` ZmnSCPxj
2022-02-28 7:55 ` vjudeu
2022-02-28 22:54 ` Paul Sztorc
0 siblings, 2 replies; 34+ messages in thread
From: ZmnSCPxj @ 2022-02-28 6:49 UTC (permalink / raw)
To: Paul Sztorc, Bitcoin Protocol Discussion
Good morning Paul,
> On 2/26/2022 9:00 PM, ZmnSCPxj wrote:
>
> > ...
> >
> > > Such a technique would need to meet two requirements (or, so it seems to me):
> > > #1: The layer1 UTXO (that defines the channel) can never change (ie, the 32-bytes which define the p2sh/tapscript/covenant/whatever, must stay what-they-were when the channel was opened).
> > > #2: The new part-owners (who are getting coins from the rich man), will have new pubkeys which are NOT known, until AFTER the channel is opened and confirmed on the blockchain.
> > >
> > > Not sure how you would get both #1 and #2 at the same time. But I am not up to date on the latest LN research.
> >
> > Yes, using channel factories.
>
> I think you may be wrong about this.
> Channel factories do not meet requirement #2, as they cannot grow to onboard new users (ie, new pubkeys).
> The factory-open requires that people pay to (for example), a 5-of-5 multisig. So all 5 fixed pubkeys must be known, before the factory-open is confirmed, not after.
I am not wrong about this.
You can cut-through the closure of one channel factory with the opening of another channel factory with the same 5 fixed pubkeys *plus* an additional 100 new fixed pubkeys.
With `SIGHASH_ANYPREVOUT` (which we need for Decker-Russell-Osuntokun-based channel factories) you do not even need to make new signatures for the existing channels; you just reuse the existing channel signatures, and whether or not the *single*, one-input-one-output, close+reopen transaction is confirmed, the existing channels remain usable (the signatures can be used both pre-reopen and post-reopen).
That is why I said changing the membership set requires onchain action.
But the onchain action is *only* a 1-input-1-output transaction, and with Taproot the signature needed is just 64 bytes witness (1 weight unit per byte), I had several paragraphs describing that, did you not read them?
Note as well that with sidechains, onboarding also requires action on the mainchain, in the form of a sideblock merge-mined on the mainchain.
>
> > We assume that onboarding new members is much rarer than existing members actually paying each other
>
> Imagine that Bitcoin could only onboard 5 new users per millennium, but once onboarded they had payment nirvana (could transact hundreds of trillions of times per second, privately, smart contracts, whatever).
> Sadly, the payment nirvana would not matter. The low onboarding rate would kill the project.
Fortunately even without channel factories the onboarding rate of LN is much much higher than that.
I mean, like, LN *is* live and *is* working, today, and (at least where I have looked, but I could be provincial) has a lot more onboarding activity than half-hearted sidechains like Liquid or Rootstock.
> The difference between the two rates [onboarding and payment], is not relevant. EACH rate must meet the design goal.
> It is akin to saying: " Our car shifts from park to drive in one-millionth of a second, but it can only shift into reverse once per year; but that is OK because 'we assume that going in reverse is much rarer than driving forward' ".
Your numbers absolutely suck and have no basis in reality, WTF.
Even without batched channel openings, with a typical transaction of 2 inputs, 1 LN channel output, and a change output, you can onboard ~1250 channels per mainchain block (admittedly, without any other activity).
Let us assume every user needs 5 channels on average and that is still 250 users per 10 minutes.
I expect channel factories to increase that by about 10x to 100x more, and then you are going to hit the issue of getting people to *use* Bitcoin rather than many users wanting to get in but being unable to due to block size limits.
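To reproduce those numbers, a small sketch; the ~800 vbytes per channel-opening transaction is my assumption, chosen only so that the result matches the ~1250 figure above:

    # Unbatched LN onboarding rate per mainchain block.
    BLOCK_VBYTES = 1_000_000
    VBYTES_PER_CHANNEL_OPEN = 800        # assumed size of a 2-in, 2-out channel open
    CHANNELS_PER_USER = 5

    channels = BLOCK_VBYTES // VBYTES_PER_CHANNEL_OPEN
    print(channels, "channels,", channels // CHANNELS_PER_USER, "users per ~10 minutes")
    # 1250 channels, 250 users; channel factories would multiply this further.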
>
> > Continuous operation of the sidechain then implies a constant stream of 32-byte commitments, whereas continuous operation of a channel factory, in the absence of membership set changes, has 0 bytes per block being published.
>
> That's true, but I think you have neglected to actually take out your calculator and run the numbers.
>
> Hypothetically, 10 largeblock-sidechains would be 320 bytes per block (00.032%, essentially nothing).
> Those 10, could onboard 33% of the planet in a single month [footnote], even if each sc-onboard required an average of 800 sc-bytes.
>
> Certainly not a perfect idea, as the SC onboarding rate is the same as the payment rate. But once they are onboarded, those users can immediately join the LN *from* their sidechain. (All of the SC LNs would be interoperable.)
>
> Such a strategy would take enormous pressure *off* of layer1 (relative to the "LN only" strategy). The layer1 blocksize could even **shrink** from 4 MB (wu) to 400 kb, or lower. That would cancel out the 320 bytes of overhead, many hundreds of times over.
>
> Paul
>
> [footnote] Envelope math, 10 sidechains, each 50 MB forever-fixed blocksize (which is a mere 12.5x our current 4M wu limit): 10 * 6*24*30 * ((50*1000*1000)/800) / 8.2 billion = .32926
Yes, and 33% of the planet want to use Bitcoin in the next month.
The onboarding rate only needs to be as fast as the rate at which people want to join Bitcoin, and any security you sacrifice in order to get a higher number than that is security you are sacrificing needlessly for extra capacity you are unable to utilize.
As I pointed out in the other thread:
* LN:
* Funds can be stolen IF:
* There is a 51% miner, AND
* The 51% miner is a member of a channel/channel factory you are in.
* Drivechains:
* Funds can be stolen IF:
* There is a 51% miner.
Now of course there is always the possibility that the 51% miner is in *every* channel factory globally.
But there is also the possibility that the 51% miner exists, but is *not* on every channel factory.
Indeed, for any arbitrary channel or factory, I expect that the probability of the 51% miner being a member is less than 100%, thus the combined probability is lower than Drivechains.
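To make that concrete, a toy calculation (both probabilities are invented for illustration and assumed independent):

    # Conditions under which funds can be stolen, per the comparison above.
    p_51_miner = 0.05             # assumed chance a 51% miner exists
    p_miner_in_my_factory = 0.10  # assumed chance that miner is in my factory

    print("Drivechain:", p_51_miner)                                   # theft needs only a 51% miner
    print("LN factory:", round(p_51_miner * p_miner_in_my_factory, 4)) # strictly smaller whenever
                                                                       # the membership chance < 100%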
So there is a real degradation of security in Drivechains, and if you compute the numbers, I am reasonably sure that 33% of the world is unlikely to want to use Bitcoin within one month.
I mean we already had a pandemic and everyone going online and so on, and yet Bitcoin blockchain feerates are *still* small; I had to fix a bug in CLBOSS that came up only due to hitting the minimum feerate. So no --- people are not joining Bitcoin at a rate faster than Bitcoin + LN can handle it, even with a pretty good reason to move payments online.
Worse, once 100% of the world is onboarded, the extra onboarding capacity is useless since the onboarding rate can only match the birth rate (including birth of legal persons such as corporations), which we expect is much lower than 33% increase per ***month***.
You are buying too much capacity at a real degradation in security, and I am not convinced the extra capacity is worth the loss of security.
Separating the onboarding rate from the payment rate is a *good thing*, because we can then design their structures differently.
Make onboarding slow but secure (so that their money is very secure), but make payment rate faster and less secure (because in-flight payments are likely to be much smaller than the total owned funds).
Regards,
ZmnSCPxj
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-28 6:49 ` ZmnSCPxj
@ 2022-02-28 7:55 ` vjudeu
2022-03-04 8:42 ` ZmnSCPxj
2022-02-28 22:54 ` Paul Sztorc
1 sibling, 1 reply; 34+ messages in thread
From: vjudeu @ 2022-02-28 7:55 UTC (permalink / raw)
To: ZmnSCPxj, Paul Sztorc, Bitcoin Protocol Discussion
> Continuous operation of the sidechain then implies a constant stream of 32-byte commitments, whereas continuous operation of a channel factory, in the absence of membership set changes, has 0 bytes per block being published.
The sidechain can push zero bytes on-chain, just by placing a sidechain hash in OP_RETURN inside TapScript. Then, every sidechain node can check that "this sidechain hash is connected with this Taproot address", without pushing 32 bytes on-chain.
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-28 7:55 ` vjudeu
@ 2022-03-04 8:42 ` ZmnSCPxj
2022-03-04 13:43 ` vjudeu
0 siblings, 1 reply; 34+ messages in thread
From: ZmnSCPxj @ 2022-03-04 8:42 UTC (permalink / raw)
To: vjudeu; +Cc: Bitcoin Protocol Discussion
Good morning vjudeu,
> > Continuous operation of the sidechain then implies a constant stream of 32-byte commitments, whereas continuous operation of a channel factory, in the absence of membership set changes, has 0 bytes per block being published.
>
> The sidechain can push zero bytes on-chain, just by placing a sidechain hash in OP_RETURN inside TapScript. Then, every sidechain node can check that "this sidechain hash is connected with this Taproot address", without pushing 32 bytes on-chain.
The Taproot address itself has to take up 32 bytes onchain, so this saves nothing.
Regards,
ZmnSCPxj
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-03-04 8:42 ` ZmnSCPxj
@ 2022-03-04 13:43 ` vjudeu
0 siblings, 0 replies; 34+ messages in thread
From: vjudeu @ 2022-03-04 13:43 UTC (permalink / raw)
To: ZmnSCPxj; +Cc: Bitcoin Protocol Discussion
> The Taproot address itself has to take up 32 bytes onchain, so this saves nothing.
There is always at least one address, because you have a coinbase transaction and a solo miner or mining pool that is getting the whole reward. So, instead of using separate OP_RETURN's for each sidechain, for each federation, and for every "commitment to the blockchain", all we need is just tweaking that miner's key and placing everything inside unused TapScript. Then, we don't need separate 32 bytes for this and separate 32 bytes for that, we only need a commitment and a MAST-based path that can link such commitment to the address of this miner.
So, instead of having:
<coinbasePubkey>
<opReturn1>
<opReturn2>
...
<opReturnN>
We could have:
<tweakedCoinbasePubkey>
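If I follow the suggestion, the sidechain hash would ride in the coinbase output key via the ordinary Taproot tweak. A minimal sketch of the commitment arithmetic (hashes only; the elliptic-curve addition that actually yields <tweakedCoinbasePubkey> is omitted, and the example values are made up):

    import hashlib

    def tagged_hash(tag: str, msg: bytes) -> bytes:
        t = hashlib.sha256(tag.encode()).digest()
        return hashlib.sha256(t + t + msg).digest()

    def taptweak_for_commitment(internal_key_x: bytes, sidechain_hash: bytes) -> bytes:
        # Single leaf: a script of OP_RETURN <32-byte sidechain hash>.
        script = bytes([0x6a, 0x20]) + sidechain_hash
        leaf = tagged_hash("TapLeaf", bytes([0xc0, len(script)]) + script)
        # With one leaf, the merkle root is the leaf hash; the tweaked output key
        # would be internal_key + tweak*G (point math omitted in this sketch).
        return tagged_hash("TapTweak", internal_key_x + leaf)

    internal_key_x = bytes(32)                                      # placeholder x-only pubkey
    sidechain_hash = hashlib.sha256(b"sidechain block N").digest()  # made-up commitment
    print(taptweak_for_commitment(internal_key_x, sidechain_hash).hex())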
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-28 6:49 ` ZmnSCPxj
2022-02-28 7:55 ` vjudeu
@ 2022-02-28 22:54 ` Paul Sztorc
2022-03-01 5:39 ` Billy Tetrud
1 sibling, 1 reply; 34+ messages in thread
From: Paul Sztorc @ 2022-02-28 22:54 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 5464 bytes --]
On 2/28/2022 1:49 AM, ZmnSCPxj wrote:
> ...
>>>> ...
>>>>
>>>> Perhaps, someone will invent a way, to LN-onboard WITHOUT needing new layer1 bytes.
>>>>
>>>> If so, a "rich man" could open a LN channel, and gradually transfer it to new people.
>>>>
>>>> Such a technique would need to meet two requirements (or, so it seems to me):
>>>> #1: The layer1 UTXO (that defines the channel) can never change (ie, the 32-bytes which define the p2sh/tapscript/covenant/whatever, must stay what-they-were when the channel was opened).
>>>> #2: The new part-owners (who are getting coins from the rich man), will have new pubkeys which are NOT known, until AFTER the channel is opened and confirmed on the blockchain.
>>>>
>>>> Not sure how you would get both #1 and #2 at the same time. But I am not up to date on the latest LN research.
>>> Yes, using channel factories.
>> I think you may be wrong about this.
>> ...
> I am not wrong about this.
Well, let's take a closer look then.
The topic was: "a way, to LN-onboard [a new pubkey] WITHOUT needing new layer1 bytes".
By which I meant, that I could generate a new pubkey right now, and add it to the LN, without any onchain action.
I can shorten and restate the two requirements (and reorder them) as:
#2: Can later add a new public key to the membership set.
#1: Without an onchain action.
And yet you yourself say, very clearly:
> ... That is why I said changing the membership set requires onchain action.
Which would seem to directly contradict what you say about channel factories.
Unless you can show me how to add my new pubkey_4, to a 3-of-3 channel factory opened last year. Without using an onchain action.
You seem to want to instead change the subject. (To something like: 'we can do better than the rate (32 bytes per 5 onboards) from your footnote'.)
Which is fine. But it is not what I brought up.
***
In general, you seem to have a future in mind, where new users onboard via factory.
For example, 50,000 new users want to onboard in the next block. These strangers spontaneously organize into 1000 factories of 55 people each (50 newbies with zero coins + 5 wealthier BTC holders who have lots of coins). They then broadcast into the block and join Bitcoin.
And this one factory provides them with many channels, so it can meet most/all of their needs.
I am not here to critique factories. I was simply observing that your logic "sidechains don't scale, because you have to share your messages" is not quite airtight, because in the case of onboarding the situation is reversed and so supports the exact opposite conclusion.
I believe I have made my point by now. It should be easy for people to see what each of us has in mind, and the strengths and weaknesses.
I am curious about something, though. Maybe you can help me.
Presumably there are risks to large factories. Perhaps an attacker could join each new factory with just $1 of BTC, spend this $1, and then refuse to cooperate with the factory any further. Thus they can disable the factory at a cost of one rented dollar.
If 1000 factories are opened per block, this would be 52.5 M factories per year, $52.5 million USD per year to disable all the factories out of spite. (All of which they would eventually get back.) I can think of a few people who might try it.
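A quick sketch of that griefing cost, for reference:

    # Cost to grief every new factory with $1 of rented capital.
    FACTORIES_PER_BLOCK = 1000
    BLOCKS_PER_YEAR = 6 * 24 * 365

    factories_per_year = FACTORIES_PER_BLOCK * BLOCKS_PER_YEAR
    print(f"{factories_per_year/1e6:.2f} M factories/year, about ${factories_per_year/1e6:.2f} M locked")
    # ~52.56 M factories/year, i.e. the ~$52.5 M figure above (rounded down).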
> I mean, like, LN ... has a lot more onboarding activity than half-hearted sidechains like Liquid or Rootstock.
I don't see the relevance of this. We are talking about the future (theoretical), not the past (empirical).
For example, someone could say "Ethereum has a lot more onboarding activity than LN ..." but this would also make no difference to anything.
> ...The onboarding rate only needs to be as fast as the rate at which people want to join Bitcoin.
> ...
>
> As I pointed out in the other thread:
>
> * LN:
> * Funds can be stolen IF:
> * There is a 51% miner, AND
> * The 51% miner is a member of a channel/channel factory you are in.
> * Drivechains:
> * Funds can be stolen IF:
> * There is a 51% miner.
> ...
> So there is a real degradation of security in Drivechains, and if you compute the numbers, I am reasonably sure that 33% of the world is unlikely to want to use Bitcoin within one month.
> I mean we already had a pandemic and everyone going online and so on, and yet Bitcoin blockchain feerates are *still* small, I had to fix a bug in CLBOSS that came up only due to hitting the minimum feerate, so no --- people are not joining Bitcoin at a rate faster than Bitcoin + LN can handle it, even with a pretty good reason to move payments online.
>
> Worse, once 100% of the world is onboarded, the extra onboarding capacity is useless since the onboarding rate can only match the birth rate (including birth of legal persons such as corporations), which we expect is much lower than 33% increase per ***month***.
>
> You are buying too much capacity at a real degradation in security, and I am not convinced the extra capacity is worth the loss of security.
>
> Separating the onboarding rate from the payment rate is a *good thing*, because we can then design their structures differently.
> Make onboarding slow but secure (so that their money is very secure), but make payment rate faster and less secure (because in-flight payments are likely to be much smaller than the total owned funds).
Obviously I don't agree with any of these sentences (most are irrelevant, some false). But I would only be repeating myself.
Paul
[-- Attachment #2: Type: text/html, Size: 6590 bytes --]
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-28 22:54 ` Paul Sztorc
@ 2022-03-01 5:39 ` Billy Tetrud
2022-03-02 0:00 ` Paul Sztorc
0 siblings, 1 reply; 34+ messages in thread
From: Billy Tetrud @ 2022-03-01 5:39 UTC (permalink / raw)
To: Paul Sztorc, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 7969 bytes --]
@Paul
> I believe that money has very strong network effects. ... users will
"clump up" and get "stuck".
I'm of the same opinion.
> This entire issue is avoided completely, if all the chains
--decentralized and centralized-- and in the same monetary unit. Then, the
monetary network effects never interfere, and the decentralized chain is
always guaranteed to exist.
It sounds like what you're saying is that without side chains, everyone
might switch entirely to some altcoin and bitcoin will basically die. And
at that point, the insecurity of that coin people switched to can be
heavily exploited by some attacker(s). Is that right? It's an interesting
thought experiment. However, it leads me to wonder: if a sidechain gets so
popular that it dominates the main chain, why would people keep that main
chain around at all? A sidechain could eject the main chain and all its
baggage if it got so big. So I don't think it can really be said that the
problem can be avoided "completely". But in any case, I see your line of
thinking.
> someone is actually in the wrong, if they proactively censor an
experiment of any type. If a creator is willing to stand behind something,
then it should be tried.
> it makes no difference if users have their funds stolen from a
centralized Solana contract or from a bip300 centralized bit-Solana
sidechain. I don't see why the tears shed would be any different.
I agree with you. My point was not that we should stop anyone from doing
this. My point was only that we shouldn't advocate for ideas we think
aren't good. You were advocating for a "largeblock sidechain", and unless
you have good reasons to think that is an idea likely to succeed and want
to share them with us, you shouldn't be advocating for that. But
certainly if someone *does* think so and has their own reasons, I wouldn't
want to censor or stop them. But I wouldn't advocate for them to do it
unless their ideas were convincing to me, because I know enough to know the
dangers of large block blockchains.
On Mon, Feb 28, 2022 at 4:55 PM Paul Sztorc via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> On 2/28/2022 1:49 AM, ZmnSCPxj wrote:
>
> ...
>
> ...
>
> Perhaps, someone will invent a way, to LN-onboard WITHOUT needing new layer1 bytes.
>
> If so, a "rich man" could open a LN channel, and gradually transfer it to new people.
>
> Such a technique would need to meet two requirements (or, so it seems to me):
> #1: The layer1 UTXO (that defines the channel) can never change (ie, the 32-bytes which define the p2sh/tapscript/covenant/whatever, must stay what-they-were when the channel was opened).
> #2: The new part-owners (who are getting coins from the rich man), will have new pubkeys which are NOT known, until AFTER the channel is opened and confirmed on the blockchain.
>
> Not sure how you would get both #1 and #2 at the same time. But I am not up to date on the latest LN research.
>
> Yes, using channel factories.
>
> I think you may be wrong about this.
> ...
>
> I am not wrong about this.
>
> Well, let's take a closer look then.
>
> The topic was: "a way, to LN-onboard [a new pubkey] WITHOUT needing new layer1 bytes".
>
> By which I meant, that I could generate a new pubkey right now, and add it to the LN, without any onchain action.
>
> I can shorten and restate the two requirements (and reorder them) as:
> #2: Can later add a new public key to the membership set.
> #1: Without an onchain action.
>
> And yet you yourself say, very clearly:
>
>
> ... That is why I said changing the membership set requires onchain action.
>
> Which would seem to directly contradict what you say about channel
> factories. Unless you can show me how to add my new pubkey_4 to a 3-of-3
> channel factory opened last year. Without using an onchain action. You seem
> to want to instead change the subject. (To something like: 'we can do
> better than the rate (32 bytes per 5 onboards) from your footnote'.) Which is
> fine. But it is not what I brought up.
>
> ***
>
> In general, you seem to have a future in mind, where new users onboard via factory.
> For example, 50,000 new users want to onboard in the next block. These strangers spontaneously organize into 1000 factories of 55 people each (50 newbies with zero coins + 5 wealthier BTC holders who have lots of coins). They then broadcast into the block and join Bitcoin.
> And this one factory provides them with many channels, so it can meet most/all of their needs.
>
> I am not here to critique factories. I was simply observing that your logic "sidechains don't scale, because you have to share your messages" is not quite airtight, because in the case of onboarding the situation is reversed and so supports the exact opposite conclusion.
> I believe I have made my point by now. It should be easy for people to see what each of us has in mind, and the strengths and weaknesses.
>
> I am curious about something, though. Maybe you can help me.
> Presumably there are risks to large factories. Perhaps an attacker could join each new factory with just $1 of BTC, spend this $1, and then refuse to cooperate with the factory any further. Thus they can disable the factory at a cost of one rented dollar.
> If 1000 factories are opened per block, this would be 52.5 M factories per year, $52.5 million USD per year to disable all the factories out of spite. (All of which they would eventually get back.) I can think of a few people who might try it.
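For what it's worth, the arithmetic above checks out. A quick back-of-envelope sketch (the 1000-factories-per-block and $1-per-seat figures are the assumptions from the paragraph above, not measurements):

    # Back-of-envelope check of the griefing-cost figure above.
    blocks_per_year = 6 * 24 * 365          # ~52,560 blocks at 10 min/block
    factories_per_block = 1000              # assumption from the text
    cost_per_factory_usd = 1                # $1 of rented BTC per attacker seat
    factories_per_year = blocks_per_year * factories_per_block
    total_cost = factories_per_year * cost_per_factory_usd
    print(f"{factories_per_year:,} factories/year, ~${total_cost:,} to stall them all")
    # -> 52,560,000 factories/year, ~$52,560,000 -- i.e., roughly $52.5M per year.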
>
>
> I mean, like, LN ... has a lot more onboarding activity than half-hearted sidechains like Liquid or Rootstock.
>
> I don't see the relevance of this. We are talking about the future
> (theoretical), not the past (empirical). For example, someone could say
> "Ethereum has a lot more onboarding activity than LN ..." but this would
> also make no difference to anything.
>
> ...The onboarding rate only needs to be as fast as the rate at which people want to join Bitcoin.
> ...
>
> As I pointed out in the other thread:
>
> * LN:
> * Funds can be stolen IF:
> * There is a 51% miner, AND
> * The 51% miner is a member of a channel/channel factory you are in.
> * Drivechains:
> * Funds can be stolen IF:
> * There is a 51% miner.
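Restated as predicates (just a paraphrase of the two bullet lists above in toy Python, not a formal security model):

    # Paraphrase of the comparison above: Drivechains drop the second condition.
    def ln_funds_at_risk(miner_has_51pct, miner_in_your_channel_or_factory):
        return miner_has_51pct and miner_in_your_channel_or_factory

    def drivechain_funds_at_risk(miner_has_51pct):
        return miner_has_51pct  # no second condition -- the weaker guarantee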
> ...
> So there is a real degradation of security in Drivechains, and if you compute the numbers, I am reasonably sure that 33% of the world is unlikely to want to use Bitcoin within one month.
> I mean, we already had a pandemic with everyone going online and so on, and yet Bitcoin blockchain feerates are *still* small; I had to fix a bug in CLBOSS that came up only due to hitting the minimum feerate. So no --- people are not joining Bitcoin at a rate faster than Bitcoin + LN can handle, even with a pretty good reason to move payments online.
>
> Worse, once 100% of the world is onboarded, the extra onboarding capacity is useless since the onboarding rate can only match the birth rate (including birth of legal persons such as corporations), which we expect is much lower than 33% increase per ***month***.
>
> You are buying too much capacity at a real degradation in security, and I am not convinced the extra capacity is worth the loss of security.
>
> Separating the onboarding rate from the payment rate is a *good thing*, because we can then design their structures differently.
> Make onboarding slow but secure (so that their money is very secure), but make payment rate faster and less secure (because in-flight payments are likely to be much smaller than the total owned funds).
>
> Obviously I don't agree with any of these sentences (most are irrelevant, some false). But I would only be repeating myself.
>
> Paul
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-03-01 5:39 ` Billy Tetrud
@ 2022-03-02 0:00 ` Paul Sztorc
2022-03-04 12:35 ` Billy Tetrud
0 siblings, 1 reply; 34+ messages in thread
From: Paul Sztorc @ 2022-03-02 0:00 UTC (permalink / raw)
To: Billy Tetrud, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 5128 bytes --]
On 3/1/2022 12:39 AM, Billy Tetrud wrote:
>> This entire issue is avoided completely, if all the chains
>> --decentralized and centralized-- are in the same monetary unit.
>> Then, the monetary network effects never interfere, and the
>> decentralized chain is always guaranteed to exist.
> It sounds like what you're saying is that without side chains,
> everyone might switch entirely to some altcoin and bitcoin will
> basically die. And at that point, the insecurity of that coin people
> switched to can be heavily exploited by some attacker(s). Is that right?
Yes, precisely.
> It's an interesting thought experiment. However, it leads me to wonder:
> if a sidechain gets so popular that it dominates the main chain, why
> would people keep that main chain around at all?
For some reason, this is a very popular question. I suppose if you believe in a "one size fits all" chain philosophy (see comment below), it makes sense to say "these sidechains are terrible" on Monday and then "these sidechains are so good they will replace the mainchain" on Tuesday.
In any event, sidechains cannot exist without their mainchain (as I see it). For example, imagine that you are on a zcash sidechain, and someone claims they deposited 1000 BTC from Bitcoin Core into this sidechain. Do you give them 1000 z-BTC, or not? Without the mainchain, you can't tell.
If you run the Bip300 DriveNet demo software (drivechain.info/releases), you will see for yourself: the test-sidechains are absolutely inert, UNTIL they have RPC access to the mainchain. (Exactly the same way that an LN node needs a Bitcoin Core node.)
> > someone is actually in the wrong, if they proactively censor an
> experiment of any type. If a creator is willing to stand behind
> something, then it should be tried.
> > it makes no difference if users have their funds stolen from a
> centralized Solana contract or from a bip300 centralized bit-Solana
> sidechain. I don't see why the tears shed would be any different.
> I agree with you. My point was not that we should stop anyone from
> doing this. My point was only that we shouldn't advocate for ideas we
> think aren't good. You were advocating for a "largeblock sidechain",
> and unless you have good reasons to think that is an idea likely to
> succeed and want to share them with us, then you shouldn't be
> advocating for that. But certainly if someone *does* think so and has
> their own reasons, I wouldn't want to censor or stop them. But I
> wouldn't advocate for them to do it unless their ideas were convincing
> to me, because I know enough to know the dangers of large block
> blockchains.
Yes, I strongly agree, that we should only advocate for ideas we believe in.
I do not believe in naive layer1 largeblockerism. But I do believe in sidechain largeblockism.
Something funny once happened to me when I was on a Bitcoin conference panel*. There were three people: myself, a Blockstream person, and an (ex)BitPay person. The first two of us were valiantly defending the small block position. I gave my usual speech: that node costs must remain low, so that people can run full nodes. The largeblocker mentioned that they ran many nodes (including BCH nodes etc) and didn't mind the cost, so I disclosed --in a good-natured way-- that I do not even run a BTC full node myself (out of choice). Thus, I was yammering about software I wasn't even running -- I had no skin in the game! Lo and behold -- my Blockstream smallblocker ally-on-the-panel immediately admitted to everyone that he did not run a full node either. The only node-runner was the largeblocker. The audience found this very amusing (as did I).
We smallblockers, justified our sinful nodeless behavior, as follows (paraphrasing): we receive BTC mainly from people that we know (and have a long-term relationship with); our receipts are not time sensitive; we are not paid in BTC that often; if payments turned out to be forged we would have enormous recourse against our counterparties; etc.
We did not run full nodes, because we did not need to draw on the blockchain's powers, **for those transactions**.
Which is my point: people are different, and transactions are different. I make many transactions today with VISA or Venmo. These are not censorship-resistant, but somehow I survive the month without bursting into flames.
Wouldn't life be better, if we Bitcoiners could easily sweep those fiat transactions into *some* part of the BTC universe? (For example, a family of largeblock sidechains). To me the answer is clearly yes.
Unlike layer1-largeblockism, no one running Bitcoin Core ever needs to see these 'btc' transactions (the same as we don't see them today, on account of them not existing at all); they do not burden Bitcoin Core full nodes. Hence why it seems like a good idea to me.
An SPV-wallet-of-a-largeblock-sidechain, is of course, a *disgrace* compared to a full-node-of-smallblock-mainchain-Bitcoin-Core. But, it is emphatically superior to Venmo / VISA or even "custodial LN". And certainly superior to nothing.
Paul
*https://www.youtube.com/watch?v=V3cvH2eWqfU
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-03-02 0:00 ` Paul Sztorc
@ 2022-03-04 12:35 ` Billy Tetrud
2022-03-04 20:06 ` Paul Sztorc
0 siblings, 1 reply; 34+ messages in thread
From: Billy Tetrud @ 2022-03-04 12:35 UTC (permalink / raw)
To: Paul Sztorc; +Cc: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 7152 bytes --]
> "these sidechains are terrible" on Monday and then "these sidechains are
so good they will replace the mainchain" on Tuesday
Your premise is that a sidechain might come to dominate bitcoin, and that
this would be better than an altcoin dominating bitcoin. Did I
misunderstand you? I'm not quite sure why you're balking at me simply
confirming your premise.
> sidechains cannot exist without their mainchain .. imagine .. a zcash
sidechain, and someone claims they deposited 1000 BTC
A sidechain could stop supporting deposits from or withdrawals to bitcoin
and completely break any relationship with the main chain. I agree this is
not as sure a thing as starting with an altcoin (which of course never
has that kind of relationship with bitcoin). So I do think there are some
merits to sidechains in your scenario. However, I don't think it's quite
accurate to say it completely solves the problem (of a less-secure altcoin
becoming dominant).
Your anecdote about not running a full node is amusing, and I've often
found myself in that position. I certainly agree different people are
different and so different tradeoffs can be better for different
people. However, the question is: which tradeoffs does a largeblock
sidechain handle better than, e.g., both Visa and lightning?
>Wouldn't life be better, if we Bitcoiners could easily sweep those fiat transactions into *some* part of the BTC universe? (For example, a family of largeblock sidechains). To me the answer is clearly yes.
I guess it's not as clear to me. We agree it wouldn't significantly burden
Bitcoin-only nodes, but not being a burden is not a sufficient reason to do
something, only a reason not to prevent it. But what are the benefits to a
user of that chain? Slightly lower fees than main bitcoin? More
decentralization than Visa or Venmo? Doesn't lightning already do better on
both counts?
On Tue, Mar 1, 2022 at 6:00 PM Paul Sztorc <truthcoin@gmail.com> wrote:
> On 3/1/2022 12:39 AM, Billy Tetrud wrote:
>
> This entire issue is avoided completely, if all the chains --decentralized and centralized-- are in the same monetary unit. Then, the monetary network effects never interfere, and the decentralized chain is always guaranteed to exist.
>
> It sounds like what you're saying is that without side chains, everyone might switch entirely to some altcoin and bitcoin will basically die. And at that point, the insecurity of that coin people switched to can be heavily exploited by some attacker(s). Is that right?
>
> Yes, precisely.
>
> It's an interesting thought experiment. However, it leads me to wonder: if a sidechain gets so popular that it dominates the main chain, why would people keep that main chain around at all?
>
> For some reason, this is a very popular question. I suppose if you believe in "one size fits all" chain philosophy (see comment below), it makes sense to say "these sidechains are terrible" on Monday and then "these sidechains are so good they will replace the mainchain" on Tuesday.
>
> In any event, sidechains cannot exist without their mainchain (as I see it). For example, imagine that you are on a zcash sidechain, and someone claims they deposited 1000 BTC from Bitcoin Core into this sidechain. Do you give them 1000 z-BTC, or not? Without the mainchain,
> you can't tell.
>
> If you run the Bip300 DriveNet demo software (drivechain.info/releases), you will see for yourself: the test-sidechains are absolutely inert, UNTIL they have rpc access to the mainchain. (Exactly the same way that a LN node needs a Bitcoin Core node.)
>
>
>
> > someone is actually in the wrong, if they proactively censor an experiment of any type. If a creator is willing to stand behind something, then it should be tried.
>
> > it makes no difference if users have their funds stolen from a centralized Solana contract or from a bip300 centralized bit-Solana sidechain. I don't see why the tears shed would be any different.
>
> I agree with you. My point was not that we should stop anyone from doing this. My point was only that we shouldn't advocate for ideas we think aren't good. You were advocating for a "largeblock sidechain", and unless you have good reasons to think that is an idea likely to succeed and want to share them with us, then you shouldn't be advocating for that. But certainly if someone *does* think so and has their own reasons, I wouldn't want to censor or stop them. But I wouldn't advocate for them to do it unless their ideas were convincing to me, because I know enough to know the dangers of large block blockchains.
>
> Yes, I strongly agree, that we should only advocate for ideas we believe in.
>
> I do not believe in naive layer1 largeblockerism. But I do believe in sidechain largeblockism.
>
> Something funny once happened to me when I was on a Bitcoin conference panel*. There were three people: myself, a Blockstream person, and an (ex)BitPay person. The first two of us, were valiantly defending the small block position. I gave my usual speech: that node costs must remain low, so that people can run full nodes. The largeblocker mentioned that they ran many nodes (including BCH nodes etc) and didn't mind the cost, so I disclosed --in a good-natured way-- that I do not even run a BTC full node myself (out of choice). Thus, I was yammering about software I wasn't even running, I had no skin in the game! Lo and behold -- my Blockstream smallblocker ally-on-the-panel, immediately admitted to everyone that he did not run a full node either. The only node-runner was the largeblocker. The audience found this very amusing (as did I).
>
> We smallblockers, justified our sinful nodeless behavior, as follows (paraphrasing): we receive BTC mainly from people that we know (and have a long-term relationship with); our receipts are not time sensitive; we are not paid in BTC that often; if payments turned out to be forged we would have enormous recourse against our counterparties; etc.
>
> We did not run full nodes, because we did not need to draw on the blockchain's powers, **for those transactions**.
>
> Which is my point: people are different, and transactions are different. I make many transactions today, with VISA or Venmo. These are not censorship-resistant, but somehow I survive the month, without bursting into flames.
>
> Wouldn't life be better, if we Bitcoiners could easily sweep those fiat transactions into *some* part of the BTC universe? (For example, a family of largeblock sidechains). To me the answer is clearly yes.
>
> Unlike layer1-largeblockism, no one running Bitcoin Core ever needs to see these 'btc' transactions (the same as we don't see them today, on account of them not existing at all); they do not burden Bitcoin Core full nodes. Hence why it seems like a good idea to me.
>
> An SPV-wallet-of-a-largeblock-sidechain, is of course, a *disgrace* compared to a full-node-of-smallblock-mainchain-Bitcoin-Core. But, it is emphatically superior to Venmo / VISA or even "custodial LN". And certainly superior to nothing.
>
> Paul
>
> * https://www.youtube.com/watch?v=V3cvH2eWqfU
>
>
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-03-04 12:35 ` Billy Tetrud
@ 2022-03-04 20:06 ` Paul Sztorc
0 siblings, 0 replies; 34+ messages in thread
From: Paul Sztorc @ 2022-03-04 20:06 UTC (permalink / raw)
To: Billy Tetrud; +Cc: Bitcoin Protocol Discussion
On 3/4/2022 7:35 AM, Billy Tetrud wrote:
>> sidechains cannot exist without their mainchain ...
>
> A sidechain could stop supporting deposits from or withdrawals to
> bitcoin and completely break any relationship with the main chain.
> I agree this is not as sure of a thing as starting with an altcoin
> (which of course never has that kind of relationship with bitcoin).
> So I do think there are some merits to sidechains in your scenario.
> However, I don't think its quite accurate to say it completely
> solves the problem (of a less-secure altcoin becoming dominant).
It is hard to see how this "sidechain cuts off the mainchain" scenario
could plausibly be in enough people's interest:
* Miners would lose the block subsidy (ie, the 6.25 BTC, or whatever of
it still remains), and txn fees from the mainchain and all other
merged mined chains.
* Developers would lose the ability to create a dissenting new piece of
software (and would instead be forced into a permanent USSR-style "one
party system" intellectual monoculture).
* Users would lose --permanently-- the ability to take their coins to
new blockchains, removing almost all of their leverage.
Furthermore, because sidechains cannot exist without their parent (but
not vice-versa), we can expect a large permanent interest in keeping
mainchain node costs low. Aka: very small mainchain blocks forever. So,
the shut-it-down mainchain-haters would have to meet the question "why
not just leave things the way they are?". And the cheaper the
mainchain-nodes are, the harder that question is to answer.
However, if a sidechain really were so overwhelmingly popular as to
clear all of these hurdles, then I would first want to understand why it
is so popular. Maybe it is a good thing and we should cheer it on.
> Your anecdote about not running a full node is amusing, and I've often
> found myself in that position. I certainly agree different people are
> different and so different tradeoffs can be better for different
> people. However, the question is: which tradeoffs does a largeblock
> sidechain handle better than, e.g., both Visa and lightning?
Yes, that's true. There are very many tradeoffs in general:
1. Onboarding
2. Route Capacity / Payment Limits
3. Failed Payments
4. Speed of Payment
5. Receive while offline / need for interaction/monitoring/watchtowers
6. Micropayments
7. Types of fees charged, and for what
8. Contribution to layer1 security budget
9. Auditability (re: large organizations) / general complexity
LN is certainly better for 4 and 6. But everything else is probably up
for grabs. And this is not intended to be an exhaustive list. I just
made it up now.
(And, if the layer2 is harmless, then its existence can be justified via
one single net benefit, for some users, somewhere on the tradeoff-list.)
Paul
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-24 12:03 ` ZmnSCPxj
2022-02-26 5:38 ` Billy Tetrud
@ 2022-02-26 6:00 ` Anthony Towns
1 sibling, 0 replies; 34+ messages in thread
From: Anthony Towns @ 2022-02-26 6:00 UTC (permalink / raw)
To: ZmnSCPxj, Bitcoin Protocol Discussion
On Thu, Feb 24, 2022 at 12:03:32PM +0000, ZmnSCPxj via bitcoin-dev wrote:
> > > Logically, if the construct is general enough to form Drivechains, and
> > > we rejected Drivechains, we should also reject the general construct.
> > Not providing X because it can only be used for E, may generalise to not
> > providing Y which can also only be used for E, but it doesn't necessarily
> > generalise to not providing Z which can be used for both G and E.
> Does this not work only if the original objection to merging in BIP-300 was of the form:
> * X implements E.
> * Z implements G and E.
> * Therefore, we should not merge in X and instead should merge in the more general construct Z.
I'd describe the "original objection" more as "E is not worth doing;
X achieves nothing but E; therefore we should not work on or merge X".
Whether we should work on or eventually merge some other construct that
does other things than E, depends on the (relative) merits of those
other things.
> I think we really need someone who NACKed BIP-300 to speak up.
Here's some posts from 2017:
] I think it's great that people want to experiment with things like
] drivechains/sidechains and what not, but their security model is very
] distinct from Bitcoin's and, given the current highly centralized
] mining ecosystem, arguably not very good. So positioning them as a
] major solution for the Bitcoin project is the wrong way to go. Instead
] we should support people trying cool stuff, at their own risk.
- https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-July/014726.html
] Regardless, people are free experiment and adopt such an approach. The
] nice thing about it not being a hardfork is that it does not require
] network-wide consensus to deploy. However, I don't think they offer a
] security model that should be encouraged, and thus doesn't have a
] place on a roadmap.
- https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-July/014729.html
> If my understanding is correct and that the original objection was "Drivechains are bad for reasons R[0], R[1]...", then:
> * You can have either of these two positions:
> * R[0], R[1] ... are specious arguments and Drivechains are not bad [...]
> * R[0], R[1] ... are valid arguments and Drivechains are bad, therefore we should **NOT** merge in a feature that implements Recursive Covenants [...]
> You cannot have it both ways.
I guess you mean to say that I've got to pick one, rather than can't
pick both. But in any event, I don't pick either; my view is more along
the lines of:
* drivechains shouldn't be used
* it's okay if other people think drivechains are worth using, and go
ahead and do so, if they're not creating a direct burden on everyone
else
That's the same position I hold for other things, like using lightning
on mainnet in January 2018; or giving your bitcoin to an anonymous
custodian so it can be borrowed via a flash loan on some novel third
party smart contract platform.
> Admittedly, there may be some set of restrictions that prevent Turing-Completeness from implementing Drivechains, but you have to demonstrate a proof of that set of restrictions existing.
Like I said; I don't think the drivechains game theory works without
the implicit threat of miner censorship, and therefore you need a
"from_coinbase" flag as well as covenants. That's not a big objection,
though. (On the other hand, if I'm wrong and drivechains *do* work
without that threat; then drivechains don't cause a block size increase,
and can be safely ignored by miners and full node operators, and the
arguments against drivechains are specious; and implementing them purely
via covenants so miners aren't put in a privileged position seems an
improvement)
> > I think it's pretty reasonable to say:
> >
> > a) adding dedicated consensus features for drivechains is a bad idea
> > in the absence of widespread consensus that drivechains are likely
> > to work as designed and be a benefit to bitcoin overall
> >
> > b) if you want to risk your own funds by leaving your coins on an
> > exchange or using lightning or eltoo or tumbling/coinjoin or payment
> > pools or drivechains or being #reckless in some other way, and aren't
> > asking for consensus changes, that's your business
>
> *Shrug* I do not really see the distinction here --- in a world with Drivechains, you are free to not put your coins in a Drivechain-backed sidechain, too.
Well, yes: I'm saying there's no distinction between putting funds in
drivechains and other #reckless things you might do with your money?
My opinion is (a) we should be conservative about adding new consensus
features because of the maintenance cost; (b) we should design
consensus/policy in a way to encourage minimising the externality costs
users impose on each other; and (c) we should make it as easy as possible
to use bitcoin safely in general -- but if people *want* to be reckless,
even knowing the consequences, that's fine.
> (Admittedly, Drivechains does get into a Mutually Assured Destruction argument, so that may not hold.
> But if Drivechains going into a MAD argument is an objection, then I do not see why covenant-based Drivechains would also not get into the same MAD argument --- and if you want to avoid the MADness, you cannot support recursive covenants, either.
I think the argument you believe, but aren't quite actually making,
is along the lines of:
a) drivechain technology doesn't just potentially harm people who use
them; it is an existential threat to bitcoin if used by anyone
b) therefore the ability for anyone to implement them must be prevented
c) (a) is well known and widely agreed upon by all reasonable
well-informed people
(b) is definitely a reasonable consequence of (a), but I don't agree
with (a). Drivechains have certainly been criticised as a bad idea,
but there are plenty of bad ideas that don't need to be outlawed.
But I think the simplest *method* of preventing drivechains from having
significant adoption is just "users encourage miners to steal funds
deposited into drivechains" (eg, by declining to do a UASF to prevent
such theft), which then obviously discourages people from putting funds
into drivechains. Since that can still be done even if bip300 or an
implementation of drivechains-via-covenants is deployed, I don't think
drivechains are an existential threat to bitcoin.
> Remember, 51% attackers can always censor the blockchain, regardless of whether you put the Drivechain commitments into the coinbase, or in an ostensibly-paid-by-somebody-else transaction.)
I think you could make the analogy between drivechains and covenants a
fair bit stronger in the following way:
The idea behind drivechains and the liquid sidechain is, in both cases,
that funds can be moved to some other blockchain with its own rules, and
then moved back to the bitcoin blockchain, via the assistance of some
group that will hopefully follow the stated rules of the sidechain. In
liquid's case it's a group of semi-known functionaries who are also
directly responsible for transactions appearing on the liquid sidechain,
and it's done by them signing via multisig. For bip300, it's bitcoin
miners, and done by putting entries in the coinbase.
But because just letting any miner alone immediately move funds
would be obviously too risky to consider, bip300 adds additional
restrictions, adding both multisig-like aspects, delays, and the ability
to back-out/correct a theft attempt before it's final, which provides
the opportunity for honest participants to react to miners attempting to
cheat and hopefully achieve a legitimate outcome instead. Whether that's
enough is still debatable -- but it's certainly an improvement to go from
"too risky to consider" to "debatable".
But the same incentive can apply to liquid too: it might be good to be
able to have liquid funds encumbered on the bitcoin blockchain in such a
way that it's even harder for people with liquid's private keys to cheat
than it currently is -- ie, it would be good to be able to specify more
"vault-like" behaviours for the liquid funds, perhaps in relation to the
"backup recovery keys" [0], eg.
As a result, while it's not obvious, I think it shouldn't be *surprising*
that the same technology that allows "vaults" also enables (something
like) drivechains -- since the goal in both cases is just constraining
how withdrawals work.
Cheers,
aj
[0] https://medium.com/blockstream/patching-the-liquid-timelock-issue-b4b2f5f9a973
^ permalink raw reply [flat|nested] 34+ messages in thread
end of thread, other threads:[~2022-03-04 20:06 UTC | newest]
Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-26 7:47 [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT Prayank
2022-02-26 16:18 ` Billy Tetrud
-- strict thread matches above, loose matches on Subject: below --
2022-02-14 2:40 Lucky Star
2022-01-26 17:20 [bitcoin-dev] " Russell O'Connor
2022-02-08 3:40 ` Rusty Russell
2022-02-08 4:34 ` Jeremy Rubin
2022-02-11 0:55 ` [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was " David A. Harding
2022-02-11 3:42 ` Jeremy Rubin
2022-02-11 17:42 ` James O'Beirne
2022-02-11 18:12 ` digital vagabond
2022-02-12 10:54 ` darosior
2022-02-12 15:59 ` Billy Tetrud
2022-02-17 15:15 ` Anthony Towns
2022-02-18 7:34 ` ZmnSCPxj
2022-02-23 11:28 ` ZmnSCPxj
2022-02-23 18:14 ` Paul Sztorc
2022-02-24 2:20 ` ZmnSCPxj
2022-02-24 6:53 ` Anthony Towns
2022-02-24 12:03 ` ZmnSCPxj
2022-02-26 5:38 ` Billy Tetrud
2022-02-26 6:43 ` ZmnSCPxj
2022-02-27 0:58 ` Paul Sztorc
2022-02-27 2:00 ` ZmnSCPxj
2022-02-27 7:25 ` ZmnSCPxj
2022-02-27 16:59 ` Billy Tetrud
2022-02-27 23:50 ` Paul Sztorc
2022-02-28 0:20 ` Paul Sztorc
2022-02-28 6:49 ` ZmnSCPxj
2022-02-28 7:55 ` vjudeu
2022-03-04 8:42 ` ZmnSCPxj
2022-03-04 13:43 ` vjudeu
2022-02-28 22:54 ` Paul Sztorc
2022-03-01 5:39 ` Billy Tetrud
2022-03-02 0:00 ` Paul Sztorc
2022-03-04 12:35 ` Billy Tetrud
2022-03-04 20:06 ` Paul Sztorc
2022-02-26 6:00 ` Anthony Towns