* [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
@ 2022-01-26 17:20 Russell O'Connor
2022-01-26 22:16 ` Jeremy
` (2 more replies)
0 siblings, 3 replies; 58+ messages in thread
From: Russell O'Connor @ 2022-01-26 17:20 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
Recapping the relationship between CTV and ANYPREVOUT::
It is known that there is a significant amount of overlap in the
applications that are enabled by the CTV and ANYPREVOUT proposals despite
the fact that their primary applications (congestion control for CTV and
eltoo lightning channels for ANYPREVOUT) are quite distinct.
In particular, ANYPREVOUT can enable most of the applications of CTV,
albeit with a higher cost. The primary functionality of CTV is to allow a
scriptPubKey to make a commitment to its spending transaction's hash with
the input's TXID excluded from the hash. This exclusion is necessary
because the scriptPubKey is hashed into the input's TXID, and including the
TXID would cause a cycle of hash commitments, which is impossible to
construct. On the other hand, ANYPREVOUT defines a signature hash mode
that similarly excludes the input's TXID for its purpose of rebindable
signatures.
This means that ANYPREVOUT can mimic most of the properties of CTV by
committing a public key along with an ANYPREVOUT signature inside the
scriptPubKey. In fact, the only reason Bitcoin doesn't have covenants
today is due to this cycle between scriptPubKeys and the TXIDs that occur
in all the sighash modes.
The major differences between simulating CTV via ANYPREVOUT and the actual
CTV proposal are: (1) the cost of simulating CTV. With CTV the spending
transaction is committed using a hash of 32 bytes, while simulating it with
ANYPREVOUT requires 64 bytes for a signature, and 32 bytes for some public
key, plus a few more bytes for various flags. Some of that cost could be
reduced by using the inner public key (1-byte representation) and, if we
had CAT, maybe by assembling the signature from reusable pieces (i.e.
setting the nonce of the committed signature equal to the public key).
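The size difference above can be summarized with a rough tally (byte counts are approximate and ignore push and flag overhead):

```python
# Rough witness-cost comparison for committing to the spending transaction.
ctv_commitment = 32            # CTV: a single 32-byte hash of the spending tx
apo_signature = 64             # ANYPREVOUT simulation: a BIP340-style signature
apo_pubkey = 32                # plus a public key committed in the scriptPubKey
apo_simulation = apo_signature + apo_pubkey

# The ANYPREVOUT simulation costs roughly 64 extra bytes before the
# inner-key and signature-reuse optimizations mentioned above.
assert apo_simulation - ctv_commitment == 64
```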
The other major difference is: (2) CTV's transaction hash covers values
such as the number of inputs in the transaction and their sequence numbers,
which ANYPREVOUT does not cover. CTV's hash contains enough information so
that when combined with the missing TXIDs, you can compute the TXID of the
spending transaction. In particular, if the number of inputs is committed
to being 1, then once the TXID of the transaction containing the
scriptPubKey is known and committed to the blockchain, the TXID of its
spending transaction is deducible. And
if that transaction has outputs that have CTV commitments in them, you can
deduce their spending TXIDs in turn. While this is a pretty neat feature,
something that ANYPREVOUT cannot mimic, the main application listed for it
is using congestion control to fund lightning channels, fixing their
TXIDs in advance of them being placed on chain. However, if ANYPREVOUT
were used to mimic CTV, then likely it would be eltoo channels that would
be funded, and it isn't necessary to know the TXIDs of eltoo channels in
advance in order to use them.
An Alternative Proposal::
Given the overlap in functionality between CTV and ANYPREVOUT, I think it
makes sense to decompose their operations into their constituent pieces and
reassemble their behaviour programmatically. To this end, I'd like to
instead propose OP_TXHASH and OP_CHECKSIGFROMSTACKVERIFY.
OP_TXHASH would pop a txhash flag from the stack and compute a (tagged)
txhash in accordance with that flag, and push the resulting hash onto the
stack.
OP_CHECKSIGFROMSTACKVERIFY would pop a pubkey, message, and signature from
the stack and fail if the signature does not verify on that message.
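To make the stack semantics concrete, here is a minimal Python sketch of the two proposed opcodes. The tagged-hash construction, the flag serialization, and the signature-verification callback are illustrative assumptions, not part of any specification:

```python
import hashlib

def tagged_hash(tag: str, data: bytes) -> bytes:
    """BIP340-style tagged hash: SHA256(SHA256(tag) || SHA256(tag) || data)."""
    tag_digest = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_digest + tag_digest + data).digest()

def op_txhash(stack: list, tx_fields: bytes) -> None:
    """Pop a txhash flag, hash the flag-selected tx data, push the result.

    `tx_fields` stands in for the serialized transaction data selected by
    the flag; a real implementation would serialize fields per the flag bits.
    """
    flag = stack.pop()
    stack.append(tagged_hash("TXHASH", flag + tx_fields))

def op_checksigfromstackverify(stack: list, verify_sig) -> None:
    """Pop pubkey, message, and signature; fail unless the signature verifies."""
    pubkey, msg, sig = stack.pop(), stack.pop(), stack.pop()
    if not verify_sig(pubkey, msg, sig):
        raise ValueError("CHECKSIGFROMSTACKVERIFY: invalid signature")
```

Under this model, '&lt;apo_style_flag&gt; TXHASH &lt;pubkey&gt; CHECKSIGFROMSTACKVERIFY' is just `op_txhash` followed by `op_checksigfromstackverify` operating on the same stack.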
CTV and TXHASH have roughly equivalent functionality. 'CTV DROP' can be
simulated by '<ctv_style_flag> TXHASH EQUALVERIFY'. The reverse is also
true: '<ctv_style_flag> TXHASH' can be simulated by
'<ctv-result-from-witness-stack> CTV'. However, as you can see, simulating
TXHASH from CTV is much more expensive than the other way around, because
the resulting 32-byte hash must be included as part of the witness
stack.
'<anyprevout-pubkey> CHECKSIGVERIFY' can be simulated by '<apo_style_flag>
TXHASH <pubkey> CHECKSIGFROMSTACKVERIFY'. Here we see the advantage of
pushing the hash value onto the stack. APO can be simulated without
needing to include a copy of the resulting txhash inside the witness data.
In addition to the CTV and ANYPREVOUT applications, with
CHECKSIGFROMSTACKVERIFY we can verify signatures on arbitrary messages
signed by oracles for oracle applications. This is where we see the
benefit of decomposing operations into primitive pieces. By giving users
the ability to program their own use cases from components, we get more
applications out of fewer op codes!
Caveats::
First, I acknowledge that replicating the behaviour of CTV and ANYPREVOUT
does cost a few more bytes than using the purpose-built proposals
themselves. That is the price to be paid when we choose the ability to
program solutions from pieces. But we get to reap the advantages of being
able to build more applications from these pieces.
Unlike CTV, TXHASH is not NOP-compatible and can only be implemented within
tapscript. In particular, bare CTV isn't possible with this proposal.
However, this proposal doesn't preclude the possibility of having CTV added
to legacy script while having TXHASH added to tapscript.
For similar reasons, TXHASH is not amenable to extending the set of txflags
at a later date. In theory, one could have TXHASH abort-with-success when
encountering an unknown set of flags. However, this would make analyzing
tapscript much more difficult. Tapscripts would then be able to abort with
success or failure depending on the order script fragments are assembled
and executed, and getting the order incorrect would be catastrophic. This
behavior is manifestly different from the current batch of OP_SUCCESS
opcodes that abort-with-success just by their mere presence, whether they
would be executed or not.
I believe the difficulties with upgrading TXHASH can be mitigated by
designing a robust set of TXHASH flags from the start. For example having
bits to control whether (1) the version is covered; (2) the locktime is
covered; (3) txids are covered; (4) sequence numbers are covered; (5) input
amounts are covered; (6) input scriptpubkeys are covered; (7) number of
inputs is covered; (8) output amounts are covered; (9) output scriptpubkeys
are covered; (10) number of outputs is covered; (11) the tapbranch is
covered; (12) the tapleaf is covered; (13) the opseparator value is
covered; (14) whether all, one, or no inputs are covered; (15) whether all,
one or no outputs are covered; (16) whether the one input position is
covered; (17) whether the one output position is covered; (18) whether the
sighash flags are covered or not (note: whether or not the sighash flags
are or are not covered must itself be covered). Additional bits could
specify which input or output position is covered in the single case, and
whether that position is relative to the current input's position or is an
absolute position.
That all said, even if other txhash flag modes are needed in the future,
adding TXHASH2 always remains an option.
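As an illustration only, the flag fields above could be laid out as a bitfield. The constants and the ctv_style_flag helper below are hypothetical: bit positions and the exact set of fields a CTV-like mode would cover are assumptions, shown just to make the idea concrete.

```python
# Hypothetical bit assignments for the first ten flag fields listed above;
# the actual encoding would be fixed by a full specification.
COVER_VERSION        = 1 << 0   # (1)
COVER_LOCKTIME       = 1 << 1   # (2)
COVER_TXIDS          = 1 << 2   # (3)
COVER_SEQUENCES      = 1 << 3   # (4)
COVER_INPUT_AMOUNTS  = 1 << 4   # (5)
COVER_INPUT_SPKS     = 1 << 5   # (6)
COVER_NUM_INPUTS     = 1 << 6   # (7)
COVER_OUTPUT_AMOUNTS = 1 << 7   # (8)
COVER_OUTPUT_SPKS    = 1 << 8   # (9)
COVER_NUM_OUTPUTS    = 1 << 9   # (10)

def ctv_style_flag() -> int:
    """A CTV-like mode: commit to version, locktime, sequence numbers,
    number of inputs, and all outputs, but leave the input TXIDs free."""
    return (COVER_VERSION | COVER_LOCKTIME | COVER_SEQUENCES
            | COVER_NUM_INPUTS | COVER_OUTPUT_AMOUNTS
            | COVER_OUTPUT_SPKS | COVER_NUM_OUTPUTS)
```

Such a flag fits comfortably in a couple of bytes on the witness stack.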
Interactions with potential future opcodes::
We should give some consideration to how these opcodes may interact with
future opcodes such as CAT or rolling SHA256 opcodes, or how they might
interface with other covenant opcodes that do things like directly push
input or output amounts onto the stack for computation purposes, opcodes
which have been added to the Elements project.
With CAT and/or rolling SHA256 opcodes and/or existing SHA256 opcodes, the
CHECKSIGFROMSTACKVERIFY could verify signatures on programmatically
assembled messages. Also, in combination with multiple calls to TXHASH, it
could be used to create signatures that commit to complex subsets of
transaction data.
If new opcodes are added to push parts of the transaction data directly
onto the stack, e.g. OP_INSPECTOUTPUTVALUE, there is perhaps concern that
they would obsolete TXHASH, since, in the presence of rolling SHA256
opcodes, TXHASH could be simulated. However, given that TXHASH can
compactly create a hash of large portions of transaction data, it seems
unlikely that TXHASH would fall into disuse. Also, a combination of TXHASH
and transaction introspection opcodes can be used to build "*subtractive
covenants*".
The usual way of building a covenant, which we will call "*additive
covenants*", is to push all the parts of the transaction data you would
like to fix onto the stack, hash it all together, and verify the resulting
hash matches a fixed value. Another way of building covenants, which we
will call "*subtractive covenants*", is to push all the parts of the
transaction data you would like to remain free onto the stack. Then use
rolling SHA256 opcodes starting from a fixed midstate that commits to a
prefix of the transaction hash data. The free parts are hashed into that
midstate. Finally, the resulting hash value is verified to match a value
returned by TXHASH. The ability to nicely build subtractive covenants
depends on the details of how the TXHASH hash value is constructed,
something that I'm told CTV has given consideration to.
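A rough Python model of the subtractive check, under stated assumptions: FIXED_PREFIX is a placeholder for the serialized prefix of the transaction hash data, and since hashlib cannot export or load SHA256 midstates, restarting the hash from the raw prefix stands in for the "load the fixed midstate" step a real script would perform.

```python
import hashlib

# The fixed part of the covenant: a prefix of the transaction hash data that
# the script pins down. A real script would embed a SHA256 *midstate*
# committing to this prefix; here the raw prefix itself is the stand-in.
FIXED_PREFIX = b"version||locktime||inputs"   # placeholder serialization

def subtractive_covenant_holds(free_parts: bytes, txhash_result: bytes) -> bool:
    """Hash the witness-supplied free parts onto the fixed prefix and check
    the result against the value TXHASH returns for the transaction."""
    h = hashlib.sha256(FIXED_PREFIX)   # stands in for "load fixed midstate"
    h.update(free_parts)               # rolling-SHA256 step done in script
    return h.digest() == txhash_result
```

The spender supplies only the free parts in the witness; everything hashed into the fixed prefix is constrained without ever appearing on the stack.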
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
From: Jeremy @ 2022-01-26 22:16 UTC (permalink / raw)
To: Russell O'Connor, Bitcoin Protocol Discussion
Hi Russell,
Thanks for this email, it's great to see this approach described.
A few preliminary notes of feedback:
1) A Verify approach can be made to work for OP_TXHASH (even with CTV
as-is). E.g., suppose a semantic is added for a single-byte stack[-1]
sighash flag to read the hash at stack[-2]; then the hash can be passed in
instead of put on the stack. This has the disadvantage of larger witnesses,
but the advantage of allowing undefined sighash flags to pass for any hash
type.
2) using the internal key for APO covenants is not an option because it
makes transaction construction interactive and precludes contracts with a
NUMS point taproot key. Instead, if you want similar savings, you should
advocate an OP_GENERATOR which puts G on the stack. Further, an untagged
APO variant which has split R and S values would permit something like
<sig> OP_GENERATOR OP_GENERATOR CHECKSIGAPO, which would be only 2 more
bytes than CTV.
3) I count something like 20 different flags in your proposal. As long as
flags are under 40 bytes (and 32 assuming we want it to be easy), this
should be feasible to manipulate on the stack programmatically without
upgraded math. This is ignoring some of the more flexible additions you
mention about picking which outputs/inputs are included. However, 20 flags
means that for testing we would want comprehensive tests and understanding
for ~1 million different flag combos and the behaviors they expose. I think
this necessitates a formal model of scripting and transaction validity
properties. Are there any combinations that might be undesirable?
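The "~1 million" figure follows directly from treating the flags as independent booleans:

```python
# 20 independent boolean flags yield 2**20 subsets of covered fields,
# each a distinct TXHASH mode whose behavior would need to be understood.
n_flags = 20
combos = 2 ** n_flags
assert combos == 1_048_576   # "~1 million different flag combos"
```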
4) Just hashing or not hashing isn't actually that flexible, because it
doesn't natively let you do things like (for example) TLUV. You really do
need tx operations for directly manipulating the data on the stack to
construct the hash if you want more flexible covenants. This happens to be
compatible with either a Verify or Push approach, since you either
destructure a pushed hash or build up a hash for a verify.
5) Flexible hashing has the potential for quadratic hashing bugs. The
fields you propose seem to be within similar range to work you could cause
with a regular OP_HASH256, although you'd want to be careful with some of
the proposed extensions that you don't create risk of quadratic hashing,
which seems possible with an output selecting opcode unless you cache
properly (which might be tricky to do). Overall, the explicitly mentioned
fields seem safe; the "possibles" seem to have some more complex
interactions. E.g., CTV with the ability to pick a subset of outputs would
be exposed to quadratic hashing.
6) Missing field: covering the annex or some sub-range of the annex
(quadratic hashing issues on the latter)
7) For many of these fields, it seems simpler to push values directly (as
in OP_PUSHTXDATA from Johnson Lau), because the combo of flags to push the
hash of a single output's amount to emulate OP_AMOUNT looks 'general but
annoying'. It may make more sense to do the OP_PUSHTXDATA style opcode
instead. This also makes it simpler to think about the combinations of
flags, since it's really N independent multi-byte opcodes.
Ultimately, if we had OP_TXHASH available "tomorrow", I would be able to
build out the use cases I care about for CTV (and more). So I don't have an
objection to it with regard to lack of function.
However, if one finds the TXHASH approach acceptable, then you should also
be relatively fine doing APO, CTV, CSFS, and TXHASH in any order (whenever
"ready"), unless you are particularly sensitive to "technical debt" and
"soft fork processes". The only costs of doing something for CTV or APO
given an eventual TXHASH are perhaps a wasted key version or the 32-byte
argument of a NOP opcode, and some code to maintain.
Are there other costs I am missing?
However, as it pertains to actual rollout:
- OP_TXHASH+CSFSV doesn't seem to be the "full" set of things needed (we
still need e.g. OP_CAT, upgraded >=64-bit math, TLUV or OP_TWEAK
OP_TAPBRANCH OP_MANIPULATETAPTREE, and more) to fully realize the
covenanting power it intends to introduce.
- What sort of timeline would it take to ready something like TXHASH (and
desired friends) given greater scope of testing and analysis (standalone +
compared to CTV)?
- Is there opposition from the community to this degree of
general/recursive covenants?
- Does it make "more sense" to invest the research and development effort
that would go into proving TXHASH safe, for example, into Simplicity
instead?
Overall, *my opinion* is that:
- TXHASH is an acceptable theoretical approach, and I am happy to put more
thought into it and maybe draft a prototype of it.
- I prefer CTV as a first step for pragmatic engineering and availability
timeline reasons.
- If TXHASH were to take, optimistically, 2 years to develop and review,
and then 1 year to activate, the "path dependence of software" would put
Bitcoin in a much better place were we to have CTV within 1 year, with
applications (that would later be a subset of TXHASH) built over the next
few years and enhanced in the future by TXHASH's availability.
- There is an element of expediency merited for something like CTV insofar
as it provides primitives to tackle time-sensitive issues around privacy,
scalability, self custody, and decentralization. The aforementioned
properties may be difficult to reclaim once given away (with the exception
of perhaps scalability).
- Bringing CTV to an implemented state of near-unanimous "we could do this,
technically" is good for concretely driving the process of review for any
covenant proposals forward, irrespective of if we ultimately activate.
(I.e., if there were a reason we could not do CTV safely, it would likely
have implications for any other future covenant)
Concretely, I'm not going to stop advocating for CTV based on the above,
but I'm very happy to have something new in the mix to consider!
Best,
Jeremy
--
@JeremyRubin <https://twitter.com/JeremyRubin>
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
From: James Lu @ 2022-01-27 4:20 UTC (permalink / raw)
To: Bitcoin Protocol Discussion, Jeremy
What if OP_TXHASH is a no-op except for the purpose of emulating CTV and
APO?
On Wed, Jan 26, 2022 at 5:16 PM Jeremy via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> Hi Russell,
>
> Thanks for this email, it's great to see this approach described.
>
> A few preliminary notes of feedback:
>
> 1) a Verify approach can be made to work for OP_TXHASH (even with CTV
> as-is) E.g., suppose a semantic added for a single byte stack[-1] sighash
> flag to read the hash at stack[-2], then the hash can be passed in instead
> of put on the stack. This has the disadvantage of larger witnesses, but the
> advantage of allowing undefined sighash flags to pass for any hash type.
> 2) using the internal key for APO covenants is not an option because it
> makes transaction construction interactive and precludes contracts with a
> NUMS point taproot key. Instead, if you want similar savings, you should
> advocate an OP_GENERATOR which puts G on the stack. Further, an untagged
> APO variant which has split R and S values would permit something like
> <sig> OP_GENERATOR OP_GENERATOR CHECKSIGAPO, which would be only 2 more
> bytes than CTV.
> 3) I count something like 20 different flags in your proposal. As long as
> flags are under 40 bytes (and 32 assuming we want it to be easy) without
> upgrading math this should be feasible to manipulate on the stack
> programmatically. This is ignoring some of the more flexible additions you
> mention about picking which outputs/inputs are included. However, 20 flags
> means that for testing we would want comprehensive tests and understanding
> for ~1 million different flag combos and the behaviors they expose. I think
> this necessitates a formal model of scripting and transaction validity
> properties. Are there any combinations that might be undesirable?
> 4) Just hashing or not hashing isn't actually that flexible, because it
> doesn't natively let you do things like (for example) TLUV. You really do
> need tx operations for directly manipulating the data on the stack to
> construct the hash if you want more flexible covenants. This happens to be
> compatible with either a Verify or Push approach, since you either
> destructure a pushed hash or build up a hash for a verify.
> 5) Flexible hashing has the potential for quadratic hashing bugs. The
> fields you propose seem to be within similar range to work you could cause
> with a regular OP_HASH256, although you'd want to be careful with some of
> the proposed extensions that you don't create risk of quadratic hashing,
> which seems possible with an output selecting opcode unless you cache
> properly (which might be tricky to do). Overall for the fields explicitly
> mentioned, seems safe, the "possibles" seem to have some more complex
> interactions. E.g., CTV with the ability to pick a subset of outputs would
> be exposed to quadratic hashing.
> 6) Missing field: covering the annex or some sub-range of the annex
> (quadratic hashing issues on the latter)
> 7) It seems simpler to, for many of these fields, push values directly (as
> in OP_PUSHTXDATA from Johnson Lau) because the combo of flags to push the
> hash of a single output's amount to emulate OP_AMOUNT looks 'general but
> annoying'. It may make more sense to do the OP_PUSHTXDATA style opcode
> instead. This also makes it simpler to think about the combinations of
> flags, since it's really N independent multi-byte opcodes.
>
>
> Ultimately if we had OP_TXHASH available "tomorrow", I would be able to
> build out the use cases I care about for CTV (and more). So I don't have an
> opposition on it with regards to lack of function.
>
> However, if one finds the TXHASH approach acceptable, then you should also
> be relatively fine doing APO, CTV, CSFS, TXHASH acceptable in any order
> (whenever "ready"), unless you are particularly sensitive to "technical
> debt" and "soft fork processes". The only costs of doing something for CTV
> or APO given an eventual TXHASH is perhaps a wasted key version or the 32
> byte argument of a NOP opcode and some code to maintain.
>
> Are there other costs I am missing?
>
> However, as it pertains to actual rollout:
>
> - OP_TXHASH+CSFSV doesn't seem to be the "full" set of things needed (we
> still need e.g. OP_CAT, Upgraded >=64 bit Math, TLUV or OP_TWEAK
> OP_TAPBRANCH OP_MANIPULATETAPTREE, and more) to full realize covenanting
> power it intends to introduce.
> - What sort of timeline would it take to ready something like TXHASH (and
> desired friends) given greater scope of testing and analysis (standalone +
> compared to CTV)?
> - Is there opposition from the community to this degree of
> general/recursive covenants?
> - Does it make "more sense" to invest the research and development effort
> that would go into proving TXHASH safe, for example, into Simplicity
> instead?
>
> Overall, *my opinion *is that:
>
> - TXHASH is an acceptable theoretical approach, and I am happy to put more
> thought into it and maybe draft a prototype of it.
> - I prefer CTV as a first step for pragmatic engineering and availability
> timeline reasons.
> - If TXHASH were to take, optimistically, 2 years to develop and review,
> and then 1 year to activate, the "path dependence of software" would put
> Bitcoin in a much better place were we to have CTV within 1 year and
> applications (that are to be a subset of TXHASH later) being built over the
> next few years enhanced in the future by TXHASH's availability.
> - There is an element of expediency meritted for something like CTV
> insofar as it provides primitives to tackle time sensitive issues around
> privacy, scalability, self custody, and decentralization. The
> aforementioned properties may be difficult to reclaim once given away (with
> the exception of perhaps scalability).
> - Bringing CTV to an implemented state of near-unanimous "we could do
> this, technically" is good for concretely driving the process of review for
> any covenant proposals forward, irrespective of if we ultimately activate.
> (I.e., if there were a reason we could not do CTV safely, it would likely
> have implications for any other future covenant)
>
> Concretely, I'm not going to stop advocating for CTV based on the above,
> but I'm very happy to have something new in the mix to consider!
>
> Best,
>
> Jeremy
>
>
> --
> @JeremyRubin <https://twitter.com/JeremyRubin>
> <https://twitter.com/JeremyRubin>
>
>
> On Wed, Jan 26, 2022 at 9:23 AM Russell O'Connor via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Recapping the relationship between CTV and ANYPREVOUT::
>>
>> It is known that there is a significant amount of overlap in the
>> applications that are enabled by the CTV and ANYPREVOUT proposals despite
>> the fact that their primary applications (congestion control for CTV and
>> eltoo lightning channels for ANYPREVOUT) are quite distinct.
>> In particular, ANYPREVOUT can enable most of the applications of CTV,
>> albeit with a higher cost. The primary functionality of CTV is to allow a
>> scriptPubKey to make a commitment to its spending transaction's hash with
>> the input's TXID excluded from the hash. This exclusion is necessary
>> because the scriptPubKey is hashed into the input's TXID, and including the
>> TXID would cause a cycle of hash commitments, which is impossible to
>> construct. On the other hand, ANYPREVOUT defines a signature hash mode
>> that similarly excludes the input's TXID for its purpose of rebindable
>> signatures.
>>
>> This means that ANYPREVOUT can mimic most of the properties of CTV by
>> committing both a public key along with an ANYPREVOUT signature inside
>> scriptPubKey. In fact, the only reason Bitcoin doesn't have covenants
>> today is due to this cycle between scriptPubKeys and the TXIDs that occur
>> in all the sighash modes.
>>
>> The major differences between simulating CTV via ANYPREVOUT and the
>> actual CTV proposal is: (1) The cost of simulating CTV. With CTV the
>> spending transaction is committed using a hash of 32 bytes, while
>> simulating it with ANYPREVOUT requires 64 bytes for a signature, and 32
>> bytes for some public key, plus a few more bytes for various flags. Some
>> of that cost could be reduced by using the inner public key (1 byte
>> representation) and, if we had CAT, maybe by assembling the signature from
>> reusable pieces (i.e. setting the nonce of the committed signature equal to
>> the public key).
>>
>> The other major difference is: (2) CTV's transaction hash covers values
>> such as the number of inputs in the transaction and their sequence numbers,
>> which ANYPREVOUT does not cover. CTV's hash contains enough information so
>> that when combined with the missing TXIDs, you can compute the TXID of the
>> spending transaction. In particular if the number of inputs is committed
>> to being 1, once the scriptpubkey's transaction id is known and committed
>> to the blockchain, the TXID of its spending transaction is deducible. And
>> if that transaction has outputs that have CTV commitments in them, you can
>> deduce their spending TXIDs in turn. While this is a pretty neat feature,
>> something that ANYPREVOUT cannot mimic, the main application for it is
>> listed as using congestion control to fund lightning channels, fixing their
>> TXIDs in advance of them being placed on chain. However, if ANYPREVOUT
>> were used to mimic CTV, then likely it would be eltoo channels that would
>> be funded, and it isn't necessary to know the TXIDs of eltoo channels in
>> advance in order to use them.
>>
>>
>>
>> An Alternative Proposal::
>>
>> Given the overlap in functionality between CTV and ANYPREVOUT, I think it
>> makes sense to decompose their operations into their constituent pieces and
>> reassemble their behaviour programmatically. To this end, I'd like to
>> instead propose OP_TXHASH and OP_CHECKSIGFROMSTACKVERIFY.
>>
>> OP_TXHASH would pop a txhash flag from the stack and compute a (tagged)
>> txhash in accordance with that flag, and push the resulting hash onto the
>> stack.
>> OP_CHECKSIGFROMSTACKVERIFY would pop a pubkey, message, and signature
>> from the stack and fail if the signature does not verify on that message.
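[Editorial sketch: the stack behaviour described above can be pictured as follows. This is toy Python, not consensus code; `toy_txhash` and the hash-based `fake_verify` used below are stand-ins for the real transaction-hashing and Schnorr-verification logic, and the field encoding and tag are made up.]

```python
import hashlib

class ScriptError(Exception):
    pass

def tagged_hash(tag: bytes, data: bytes) -> bytes:
    # BIP-340 style tagged hash: sha256(sha256(tag) || sha256(tag) || data)
    t = hashlib.sha256(tag).digest()
    return hashlib.sha256(t + t + data).digest()

def toy_txhash(flag: int, tx_fields: list) -> bytes:
    # Stand-in for the real txhash: cover only the fields whose bit
    # is set in `flag`.
    covered = b"".join(f for i, f in enumerate(tx_fields) if flag & (1 << i))
    return tagged_hash(b"TapTxHash", covered)

def op_txhash(stack: list, tx_fields: list) -> None:
    # Pop a txhash flag, push the corresponding (tagged) hash.
    flag = stack.pop()
    stack.append(toy_txhash(flag, tx_fields))

def op_checksigfromstackverify(stack: list, verify) -> None:
    # Pop pubkey, message, and signature; fail the script on a bad signature.
    pubkey, msg, sig = stack.pop(), stack.pop(), stack.pop()
    if not verify(sig, msg, pubkey):
        raise ScriptError("CHECKSIGFROMSTACKVERIFY failed")
```

Chaining the two as '<flag> TXHASH <pubkey> CHECKSIGFROMSTACKVERIFY' is then just `op_txhash` followed by pushing the key and running `op_checksigfromstackverify`, with the hash never touching the witness.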
>>
>> CTV and TXHASH have roughly equivalent functionality. 'CTV DROP' can be
>> simulated by '<ctv_style_flag> TXHASH EQUALVERIFY'. The reverse is also
>> true where '<ctv_style_flag> TXHASH' can be simulated by CTV by
>> '<ctv-result-from-witness-stack> CTV', however, as you can see, simulating
>> TXHASH from CTV is much more expensive than the other way around, because
>> the resulting 32-byte hash result must be included as part of the witness
>> stack.
>>
>> '<anyprevout-pubkey> CHECKSIGVERIFY' can be simulated by '<apo_style_flag>
>> TXHASH <pubkey> CHECKSIGFROMSTACKVERIFY'. Here we see the advantage of
>> pushing the hash value onto the stack. APO can be simulated without
>> needing to include a copy of the resulting txhash inside the witness data.
>>
>> In addition to the CTV and ANYPREVOUT applications, with
>> CHECKSIGFROMSTACKVERIFY we can verify signatures on arbitrary messages
>> signed by oracles for oracle applications. This is where we see the
>> benefit of decomposing operations into primitive pieces. By giving users
>> the ability to program their own use cases from components, we get more
>> applications out of fewer op codes!
>>
>>
>>
>> Caveats::
>>
>> First, I acknowledge that replicating the behaviour of CTV and ANYPREVOUT
>> does cost a few more bytes than using the custom purpose built proposals
>> themselves. That is the price to be paid when we choose the ability to
>> program solutions from pieces. But we get to reap the advantages of being
>> able to build more applications from these pieces.
>>
>> Unlike CTV, TXHASH is not NOP-compatible and can only be implemented
>> within tapscript. In particular, bare CTV isn't possible with this
>> proposal. However, this proposal doesn't preclude the possibility of
>> having CTV added to legacy script while having TXHASH added to tapscript.
>>
>> For similar reasons, TXHASH is not amenable to extending the set of
>> txflags at a later date. In theory, one could have TXHASH
>> abort-with-success when encountering an unknown set of flags. However,
>> this would make analyzing tapscript much more difficult. Tapscripts would
>> then be able to abort with success or failure depending on the order script
>> fragments are assembled and executed, and getting the order incorrect would
>> be catastrophic. This behavior is manifestly different from the current
>> batch of OP_SUCCESS opcodes that abort-with-success just by their mere
>> presence, whether they would be executed or not.
>>
>> I believe the difficulties with upgrading TXHASH can be mitigated by
>> designing a robust set of TXHASH flags from the start. For example having
>> bits to control whether (1) the version is covered; (2) the locktime is
>> covered; (3) txids are covered; (4) sequence numbers are covered; (5) input
>> amounts are covered; (6) input scriptpubkeys are covered; (7) number of
>> inputs is covered; (8) output amounts are covered; (9) output scriptpubkeys
>> are covered; (10) number of outputs is covered; (11) the tapbranch is
>> covered; (12) the tapleaf is covered; (13) the opseparator value is
>> covered; (14) whether all, one, or no inputs are covered; (15) whether all,
>> one or no outputs are covered; (16) whether the one input position is
>> covered; (17) whether the one output position is covered; (18) whether the
>> sighash flags are covered or not (note: whether or not the sighash flags
>> are or are not covered must itself be covered). Possibly specifying which
>> input or output position is covered in the single case and whether the
>> position is relative to the input's position or is an absolute position.
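[Editorial sketch: one way to picture such a flag set is as a bitfield, with a CTV-like mode being one particular combination. The bit positions below are illustrative only; the text above does not fix an encoding, and items (14)-(18) need multi-valued rather than boolean encodings.]

```python
from enum import IntFlag

class TxField(IntFlag):
    # Hypothetical bit assignments for items (1)-(13) above.
    VERSION         = 1 << 0
    LOCKTIME        = 1 << 1
    TXIDS           = 1 << 2
    SEQUENCES       = 1 << 3
    INPUT_AMOUNTS   = 1 << 4
    INPUT_SPKS      = 1 << 5
    NUM_INPUTS      = 1 << 6
    OUTPUT_AMOUNTS  = 1 << 7
    OUTPUT_SPKS     = 1 << 8
    NUM_OUTPUTS     = 1 << 9
    TAPBRANCH       = 1 << 10
    TAPLEAF         = 1 << 11
    OPSEPARATOR_POS = 1 << 12

# A CTV-like mode: commit to version, locktime, sequences, input count,
# and all outputs -- but, crucially, not the input TXIDs.
CTV_STYLE = (TxField.VERSION | TxField.LOCKTIME | TxField.SEQUENCES
             | TxField.NUM_INPUTS | TxField.OUTPUT_AMOUNTS
             | TxField.OUTPUT_SPKS | TxField.NUM_OUTPUTS)

assert not (CTV_STYLE & TxField.TXIDS)  # excluding TXIDs avoids the hash cycle
```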
>>
>> That all said, even if other txhash flag modes are needed in the future,
>> adding TXHASH2 always remains an option.
>>
>>
>>
>> Interactions with potential future opcodes::
>>
>> We should give some consideration to how these opcodes may interact with
>> future opcodes such as CAT, rolling SHA256 opcodes, or how it might
>> interface with other covenant opcodes that may do things like, directly
>> push input or output amounts onto the stack for computation purposes,
>> opcodes which have been added to the Elements project.
>>
>> With CAT and/or rolling SHA256 opcodes and/or existing SHA256 opcodes,
>> the CHECKSIGFROMSTACKVERIFY could verify signatures on programmatically
>> assembled messages. Also, in combination with multiple calls to TXHASH,
>> could be used to create signatures that commit to complex subsets of
>> transaction data.
>>
>> If new opcodes are added to push parts of the transaction data directly
>> onto the stack, e.g. OP_INSPECTOUTPUTVALUE, there is perhaps concern that
>> they would obsolete TXHASH, since, in the presence of rolling SHA256
>> opcodes, TXHASH could be simulated. However, given that TXHASH can
>> compactly create a hash of large portions of transaction data, it seems
>> unlikely that TXHASH would fall into disuse. Also, a combination of TXHASH
>> and transaction introspection opcodes can be used to build "*subtractive
>> covenants*".
>>
>> The usual way of building a covenant, which we will call "*additive *
>> *covenants*", is to push all the parts of the transaction data you would
>> like to fix onto the stack, hash it all together, and verify the resulting
>> hash matches a fixed value. Another way of building covenants, which we
>> will call "*subtractive covenants*", is to push all the parts of the
>> transaction data you would like to remain free onto the stack. Then use
>> rolling SHA256 opcodes starting from a fixed midstate that commits to a
>> prefix of the transaction hash data. The free parts are hashed into that
>> midstate. Finally, the resulting hash value is verified to match a value
>> returned by TXHASH. The ability to nicely build subtractive covenants
>> depends on the details of how the TXHASH hash value is constructed,
>> something that I'm told CTV has given consideration to.
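[Editorial sketch: the additive/subtractive contrast above can be shown in a few lines of Python. Note that `hashlib` cannot export a genuine SHA256 midstate, so a copied hash object stands in for the fixed midstate a script would commit to; the field serialization is invented for illustration.]

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Additive: the script fixes the parts it cares about and checks their
# hash against a committed constant.
def additive_check(fixed_parts: bytes, committed_hash: bytes) -> bool:
    return sha256(fixed_parts) == committed_hash

# Subtractive: the script commits to a midstate over a *prefix* of the
# transaction-hash data; the spender supplies only the free suffix, and
# the result must equal what TXHASH returns.
def subtractive_check(midstate, free_parts: bytes, txhash_result: bytes) -> bool:
    h = midstate.copy()          # resume hashing from the committed prefix
    h.update(free_parts)
    return h.digest() == txhash_result

fixed_prefix = b"version|inputs|first-output"    # fixed at script creation
midstate = hashlib.sha256(fixed_prefix)
free_suffix = b"change-output"                   # chosen at spend time
txhash_result = sha256(fixed_prefix + free_suffix)  # stand-in for TXHASH's push
assert subtractive_check(midstate, free_suffix, txhash_result)
```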
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 23164 bytes --]
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-01-26 22:16 ` Jeremy
2022-01-27 4:20 ` James Lu
@ 2022-01-27 19:16 ` Russell O'Connor
2022-01-28 0:18 ` James O'Beirne
1 sibling, 1 reply; 58+ messages in thread
From: Russell O'Connor @ 2022-01-27 19:16 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 18566 bytes --]
I am sensitive to technical debt and soft fork processes, and I don't
believe I'm unusually particular about these issues. Once implemented,
opcodes must be supported and maintained indefinitely. Some opcodes are
easier to maintain than others. These particular opcodes involve caching
of hash computations and, for that reason, I would judge them to be of
moderate complexity.
But more importantly, soft-forks are inherently a risky process, so we
should be getting as much value out of them as we reasonably can. I don't
think implementing a CTV opcode that we expect to largely be obsoleted by a
TXHASH at a later date is yielding good value from a soft fork process.
The strongest argument I can make in favour of CTV would be something like:
"We definitely want bare CTV and if we are going to add CTV to legacy
script (since we cannot use TXHASH in legacy script), then it is actually
easier not to exclude it from tapscript, even if we plan to add TXHASH to
tapscript as well."
But that argument basically rests the entire value of CTV on the shoulders
of bare CTV. As I understand it, the argument for why we want bare CTV,
instead of just letting people use tapscript, involves the finer details of
weight calculations, and I haven't really reviewed that aspect yet. I
think it would need to be pretty compelling to make it worthwhile to add
CTV for that one use case.
Regarding "OP_TXHASH+CSFSV doesn't seem to be the 'full' set of things
needed", I totally agree we will want more things such as CAT, rolling
SHA256 opcodes, wider arithmetic, pushing amounts onto the stack, some kind
of tapleaf manipulation and/or TWEAKVERIFY. For now, I only want to argue
TXHASH+CSFSV is better than CTV+APO because it gives us more value, namely
oracle signature verification. In particular, I want to argue that
TXHASH's push semantics are better than CTV's verify semantics because they
compose better by not needing to carry an extra 32 bytes (per instance) in
the witness data. I expect that in a world of full recursive covenants,
TXHASH would still be useful as a fast and cheap way to verify the
"payload" of these covenants, i.e. that a transaction is paying a certain,
possibly large, set of addresses certain specific amounts of money. And
even if not, TXHASH+CSFSV would still be the way that eltoo would be
implemented under this proposal.
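[Editorial sketch: the witness-size point above is simple arithmetic. The byte counts below are rough approximations (push overhead and the witness discount are ignored), intended only to show where the per-instance savings come from.]

```python
# Rough per-instance byte costs for the comparisons discussed above.
HASH = 32    # size of a txhash / CTV hash
SIG = 64     # size of a Schnorr signature
PUBKEY = 32  # size of an x-only public key

# APO-style spend: signature in the witness, key in the script.
apo_cost = SIG + PUBKEY

# TXHASH+CSFSV simulation of APO: same signature and key, plus roughly
# a 1-byte flag push -- but no extra 32-byte hash in the witness,
# because TXHASH *pushes* the hash rather than verifying against one.
txhash_csfsv_cost = SIG + PUBKEY + 1

# Simulating push-style TXHASH from verify-style CTV instead costs a
# full extra hash in the witness stack per instance.
ctv_simulating_txhash_extra = HASH

assert txhash_csfsv_cost - apo_cost == 1
assert ctv_simulating_txhash_extra == 32
```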
On Wed, Jan 26, 2022 at 5:16 PM Jeremy <jlrubin@mit.edu> wrote:
> Hi Russell,
>
> Thanks for this email, it's great to see this approach described.
>
> A few preliminary notes of feedback:
>
> 1) a Verify approach can be made to work for OP_TXHASH (even with CTV
> as-is) E.g., suppose a semantic added for a single byte stack[-1] sighash
> flag to read the hash at stack[-2], then the hash can be passed in instead
> of put on the stack. This has the disadvantage of larger witnesses, but the
> advantage of allowing undefined sighash flags to pass for any hash type.
> 2) using the internal key for APO covenants is not an option because it
> makes transaction construction interactive and precludes contracts with a
> NUMS point taproot key. Instead, if you want similar savings, you should
> advocate an OP_GENERATOR which puts G on the stack. Further, an untagged
> APO variant which has split R and S values would permit something like
> <sig> OP_GENERATOR OP_GENERATOR CHECKSIGAPO, which would be only 2 more
> bytes than CTV.
> 3) I count something like 20 different flags in your proposal. As long as
> flags are under 40 bytes (and 32 assuming we want it to be easy) without
> upgrading math this should be feasible to manipulate on the stack
> programmatically. This is ignoring some of the more flexible additions you
> mention about picking which outputs/inputs are included. However, 20 flags
> means that for testing we would want comprehensive tests and understanding
> for ~1 million different flag combos and the behaviors they expose. I think
> this necessitates a formal model of scripting and transaction validity
> properties. Are there any combinations that might be undesirable?
> 4) Just hashing or not hashing isn't actually that flexible, because it
> doesn't natively let you do things like (for example) TLUV. You really do
> need tx operations for directly manipulating the data on the stack to
> construct the hash if you want more flexible covenants. This happens to be
> compatible with either a Verify or Push approach, since you either
> destructure a pushed hash or build up a hash for a verify.
> 5) Flexible hashing has the potential for quadratic hashing bugs. The
> fields you propose seem to be within similar range to work you could cause
> with a regular OP_HASH256, although you'd want to be careful with some of
> the proposed extensions that you don't create risk of quadratic hashing,
> which seems possible with an output selecting opcode unless you cache
> properly (which might be tricky to do). Overall for the fields explicitly
> mentioned, seems safe, the "possibles" seem to have some more complex
> interactions. E.g., CTV with the ability to pick a subset of outputs would
> be exposed to quadratic hashing.
> 6) Missing field: covering the annex or some sub-range of the annex
> (quadratic hashing issues on the latter)
> 7) It seems simpler to, for many of these fields, push values directly (as
> in OP_PUSHTXDATA from Johnson Lau) because the combo of flags to push the
> hash of a single output's amount to emulate OP_AMOUNT looks 'general but
> annoying'. It may make more sense to do the OP_PUSHTXDATA style opcode
> instead. This also makes it simpler to think about the combinations of
> flags, since it's really N independent multi-byte opcodes.
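[Editorial note: the testing-surface estimate in point (3) is easy to reproduce. Twenty independent boolean flags already give about a million combinations, before counting the multi-valued input/output selectors.]

```python
# "~1 million different flag combos": 20 boolean flags span 2**20 modes.
n_boolean_flags = 20
n_combos = 2 ** n_boolean_flags
assert n_combos == 1_048_576

# Each additional boolean flag doubles the space to analyze.
assert 2 ** (n_boolean_flags + 1) == 2 * n_combos
```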
>
>
> Ultimately if we had OP_TXHASH available "tomorrow", I would be able to
> build out the use cases I care about for CTV (and more). So I don't have an
> opposition on it with regards to lack of function.
>
> However, if one finds the TXHASH approach acceptable, then you should also
> be relatively fine doing APO, CTV, CSFS, TXHASH acceptable in any order
> (whenever "ready"), unless you are particularly sensitive to "technical
> debt" and "soft fork processes". The only costs of doing something for CTV
> or APO given an eventual TXHASH is perhaps a wasted key version or the 32
> byte argument of a NOP opcode and some code to maintain.
>
> Are there other costs I am missing?
>
> However, as it pertains to actual rollout:
>
> - OP_TXHASH+CSFSV doesn't seem to be the "full" set of things needed (we
> still need e.g. OP_CAT, Upgraded >=64 bit Math, TLUV or OP_TWEAK
> OP_TAPBRANCH OP_MANIPULATETAPTREE, and more) to fully realize the
> covenanting power it intends to introduce.
> - What sort of timeline would it take to ready something like TXHASH (and
> desired friends) given greater scope of testing and analysis (standalone +
> compared to CTV)?
> - Is there opposition from the community to this degree of
> general/recursive covenants?
> - Does it make "more sense" to invest the research and development effort
> that would go into proving TXHASH safe, for example, into Simplicity
> instead?
>
> Overall, *my opinion *is that:
>
> - TXHASH is an acceptable theoretical approach, and I am happy to put more
> thought into it and maybe draft a prototype of it.
> - I prefer CTV as a first step for pragmatic engineering and availability
> timeline reasons.
> - If TXHASH were to take, optimistically, 2 years to develop and review,
> and then 1 year to activate, the "path dependence of software" would put
> Bitcoin in a much better place were we to have CTV within 1 year and
> applications (that are to be a subset of TXHASH later) being built over the
> next few years enhanced in the future by TXHASH's availability.
> - There is an element of expediency merited for something like CTV
> insofar as it provides primitives to tackle time sensitive issues around
> privacy, scalability, self custody, and decentralization. The
> aforementioned properties may be difficult to reclaim once given away (with
> the exception of perhaps scalability).
> - Bringing CTV to an implemented state of near-unanimous "we could do
> this, technically" is good for concretely driving the process of review for
> any covenant proposals forward, irrespective of if we ultimately activate.
> (I.e., if there were a reason we could not do CTV safely, it would likely
> have implications for any other future covenant)
>
> Concretely, I'm not going to stop advocating for CTV based on the above,
> but I'm very happy to have something new in the mix to consider!
>
> Best,
>
> Jeremy
>
>
> --
> @JeremyRubin <https://twitter.com/JeremyRubin>
> <https://twitter.com/JeremyRubin>
>
>
> On Wed, Jan 26, 2022 at 9:23 AM Russell O'Connor via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Recapping the relationship between CTV and ANYPREVOUT::
>>
>> It is known that there is a significant amount of overlap in the
>> applications that are enabled by the CTV and ANYPREVOUT proposals despite
>> the fact that their primary applications (congestion control for CTV and
>> eltoo lightning channels for ANYPREVOUT) are quite distinct.
>> In particular, ANYPREVOUT can enable most of the applications of CTV,
>> albeit with a higher cost. The primary functionality of CTV is to allow a
>> scriptPubKey to make a commitment to its spending transaction's hash with
>> the input's TXID excluded from the hash. This exclusion is necessary
>> because the scriptPubKey is hashed into the input's TXID, and including the
>> TXID would cause a cycle of hash commitments, which is impossible to
>> construct. On the other hand, ANYPREVOUT defines a signature hash mode
>> that similarly excludes the input's TXID for its purpose of rebindable
>> signatures.
>>
>> This means that ANYPREVOUT can mimic most of the properties of CTV by
>> committing both a public key along with an ANYPREVOUT signature inside
>> scriptPubKey. In fact, the only reason Bitcoin doesn't have covenants
>> today is due to this cycle between scriptPubKeys and the TXIDs that occur
>> in all the sighash modes.
>>
>> The major differences between simulating CTV via ANYPREVOUT and the
>> actual CTV proposal is: (1) The cost of simulating CTV. With CTV the
>> spending transaction is committed using a hash of 32 bytes, while
>> simulating it with ANYPREVOUT requires 64 bytes for a signature, and 32
>> bytes for some public key, plus a few more bytes for various flags. Some
>> of that cost could be reduced by using the inner public key (1 byte
>> representation) and, if we had CAT, maybe by assembling the signature from
>> reusable pieces (i.e. setting the nonce of the committed signature equal to
>> the public key).
>>
>> The other major difference is: (2) CTV's transaction hash covers values
>> such as the number of inputs in the transaction and their sequence numbers,
>> which ANYPREVOUT does not cover. CTV's hash contains enough information so
>> that when combined with the missing TXIDs, you can compute the TXID of the
>> spending transaction. In particular if the number of inputs is committed
>> to being 1, once the scriptpubkey's transaction id is known and committed
>> to the blockchain, the TXID of its spending transaction is deducible. And
>> if that transaction has outputs that have CTV commitments in them, you can
>> deduce their spending TXIDs in turn. While this is a pretty neat feature,
>> something that ANYPREVOUT cannot mimic, the main application for it is
>> listed as using congestion control to fund lightning channels, fixing their
>> TXIDs in advance of them being placed on chain. However, if ANYPREVOUT
>> were used to mimic CTV, then likely it would be eltoo channels that would
>> be funded, and it isn't necessary to know the TXIDs of eltoo channels in
>> advance in order to use them.
>>
>>
>>
>> An Alternative Proposal::
>>
>> Given the overlap in functionality between CTV and ANYPREVOUT, I think it
>> makes sense to decompose their operations into their constituent pieces and
>> reassemble their behaviour programmatically. To this end, I'd like to
>> instead propose OP_TXHASH and OP_CHECKSIGFROMSTACKVERIFY.
>>
>> OP_TXHASH would pop a txhash flag from the stack and compute a (tagged)
>> txhash in accordance with that flag, and push the resulting hash onto the
>> stack.
>> OP_CHECKSIGFROMSTACKVERIFY would pop a pubkey, message, and signature
>> from the stack and fail if the signature does not verify on that message.
>>
>> CTV and TXHASH have roughly equivalent functionality. 'CTV DROP' can be
>> simulated by '<ctv_style_flag> TXHASH EQUALVERIFY'. The reverse is also
>> true where '<ctv_style_flag> TXHASH' can be simulated by CTV by
>> '<ctv-result-from-witness-stack> CTV', however, as you can see, simulating
>> TXHASH from CTV is much more expensive than the other way around, because
>> the resulting 32-byte hash result must be included as part of the witness
>> stack.
>>
>> '<anyprevout-pubkey> CHECKSIGVERIFY' can be simulated by '<apo_style_flag>
>> TXHASH <pubkey> CHECKSIGFROMSTACKVERIFY'. Here we see the advantage of
>> pushing the hash value onto the stack. APO can be simulated without
>> needing to include a copy of the resulting txhash inside the witness data.
>>
>> In addition to the CTV and ANYPREVOUT applications, with
>> CHECKSIGFROMSTACKVERIFY we can verify signatures on arbitrary messages
>> signed by oracles for oracle applications. This is where we see the
>> benefit of decomposing operations into primitive pieces. By giving users
>> the ability to program their own use cases from components, we get more
>> applications out of fewer op codes!
>>
>>
>>
>> Caveats::
>>
>> First, I acknowledge that replicating the behaviour of CTV and ANYPREVOUT
>> does cost a few more bytes than using the custom purpose built proposals
>> themselves. That is the price to be paid when we choose the ability to
>> program solutions from pieces. But we get to reap the advantages of being
>> able to build more applications from these pieces.
>>
>> Unlike CTV, TXHASH is not NOP-compatible and can only be implemented
>> within tapscript. In particular, bare CTV isn't possible with this
>> proposal. However, this proposal doesn't preclude the possibility of
>> having CTV added to legacy script while having TXHASH added to tapscript.
>>
>> For similar reasons, TXHASH is not amenable to extending the set of
>> txflags at a later date. In theory, one could have TXHASH
>> abort-with-success when encountering an unknown set of flags. However,
>> this would make analyzing tapscript much more difficult. Tapscripts would
>> then be able to abort with success or failure depending on the order script
>> fragments are assembled and executed, and getting the order incorrect would
>> be catastrophic. This behavior is manifestly different from the current
>> batch of OP_SUCCESS opcodes that abort-with-success just by their mere
>> presence, whether they would be executed or not.
>>
>> I believe the difficulties with upgrading TXHASH can be mitigated by
>> designing a robust set of TXHASH flags from the start. For example having
>> bits to control whether (1) the version is covered; (2) the locktime is
>> covered; (3) txids are covered; (4) sequence numbers are covered; (5) input
>> amounts are covered; (6) input scriptpubkeys are covered; (7) number of
>> inputs is covered; (8) output amounts are covered; (9) output scriptpubkeys
>> are covered; (10) number of outputs is covered; (11) the tapbranch is
>> covered; (12) the tapleaf is covered; (13) the opseparator value is
>> covered; (14) whether all, one, or no inputs are covered; (15) whether all,
>> one or no outputs are covered; (16) whether the one input position is
>> covered; (17) whether the one output position is covered; (18) whether the
>> sighash flags are covered or not (note: whether or not the sighash flags
>> are or are not covered must itself be covered). Possibly specifying which
>> input or output position is covered in the single case and whether the
>> position is relative to the input's position or is an absolute position.
>>
>> That all said, even if other txhash flag modes are needed in the future,
>> adding TXHASH2 always remains an option.
>>
>>
>>
>> Interactions with potential future opcodes::
>>
>> We should give some consideration to how these opcodes may interact with
>> future opcodes such as CAT, rolling SHA256 opcodes, or how it might
>> interface with other covenant opcodes that may do things like, directly
>> push input or output amounts onto the stack for computation purposes,
>> opcodes which have been added to the Elements project.
>>
>> With CAT and/or rolling SHA256 opcodes and/or existing SHA256 opcodes,
>> the CHECKSIGFROMSTACKVERIFY could verify signatures on programmatically
>> assembled messages. Also, in combination with multiple calls to TXHASH,
>> could be used to create signatures that commit to complex subsets of
>> transaction data.
>>
>> If new opcodes are added to push parts of the transaction data directly
>> onto the stack, e.g. OP_INSPECTOUTPUTVALUE, there is perhaps concern that
>> they would obsolete TXHASH, since, in the presence of rolling SHA256
>> opcodes, TXHASH could be simulated. However, given that TXHASH can
>> compactly create a hash of large portions of transaction data, it seems
>> unlikely that TXHASH would fall into disuse. Also, a combination of TXHASH
>> and transaction introspection opcodes can be used to build "*subtractive
>> covenants*".
>>
>> The usual way of building a covenant, which we will call "*additive *
>> *covenants*", is to push all the parts of the transaction data you would
>> like to fix onto the stack, hash it all together, and verify the resulting
>> hash matches a fixed value. Another way of building covenants, which we
>> will call "*subtractive covenants*", is to push all the parts of the
>> transaction data you would like to remain free onto the stack. Then use
>> rolling SHA256 opcodes starting from a fixed midstate that commits to a
>> prefix of the transaction hash data. The free parts are hashed into that
>> midstate. Finally, the resulting hash value is verified to match a value
>> returned by TXHASH. The ability to nicely build subtractive covenants
>> depends on the details of how the TXHASH hash value is constructed,
>> something that I'm told CTV has given consideration to.
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>
[-- Attachment #2: Type: text/html, Size: 23870 bytes --]
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-01-27 19:16 ` Russell O'Connor
@ 2022-01-28 0:18 ` James O'Beirne
2022-01-28 13:14 ` Michael Folkson
` (2 more replies)
0 siblings, 3 replies; 58+ messages in thread
From: James O'Beirne @ 2022-01-28 0:18 UTC (permalink / raw)
To: Russell O'Connor, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 21308 bytes --]
> I don't think implementing a CTV opcode that we expect to largely be
obsoleted by a TXHASH at a later date is yielding good value from a soft
fork process.
This presumes the eventual adoption of TXHASH (or something like it).
You're presenting a novel idea that, as far as I know, hasn't had much time
to bake in public. Like Jeremy, I'm concerned by the combinatorial growth
of flags and the implications that has for testing. Caching for something
like TXHASH looks to me like a whole different ballgame relative to CTV,
which has a single kind of hash.
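[Editorial sketch: the caching concern can be made concrete. CTV needs one memoized digest per transaction, whereas a TXHASH-style cache must be keyed on the flag as well, so a script can request many distinct hashes of the same large transaction. `hash_tx_fields` below is a stand-in for real field hashing.]

```python
import hashlib

def hash_tx_fields(tx_bytes: bytes, flag: int) -> bytes:
    # Stand-in: pretend the flag selects which fields get hashed.
    return hashlib.sha256(flag.to_bytes(4, "little") + tx_bytes).digest()

class TxHashCache:
    # Cache keyed on (tx, flag): each distinct flag forces a distinct
    # entry and a fresh hashing pass, unlike CTV's single per-transaction
    # digest.
    def __init__(self):
        self.cache = {}

    def get(self, tx_bytes: bytes, flag: int) -> bytes:
        key = (tx_bytes, flag)
        if key not in self.cache:
            self.cache[key] = hash_tx_fields(tx_bytes, flag)
        return self.cache[key]

c = TxHashCache()
tx = b"big transaction"
c.get(tx, 0b01)
c.get(tx, 0b10)        # different flag -> second full hashing pass
assert len(c.cache) == 2
```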
Even if we were to adopt something like TXHASH, how long is it going to
take to develop, test, and release? My guess is "a while" - in the
meantime, users of Bitcoin are without a vault strategy that doesn't
require either presigning transactions with ephemeral keys (operationally
difficult) or multisig configurations that would make Rube Goldberg blush
(operationally difficult and precarious). The utility of vaulting seems
underappreciated among consensus devs and it's something I'd like to write
about soon in a separate post.
> The strongest argument I can make in favour of CTV would be something
like: "We definitely want bare CTV and if we are going to add CTV to legacy
script (since we cannot use TXHASH in legacy script), then it is actually
easier not to exclude it from tapscript, even if we plan to add TXHASH to
tapscript as well."
Another argument for CTV (which I find especially persuasive) is its
simplicity - it's relatively easy to reason about and, at this point,
pretty well understood. It seems like a low-risk change relative to some of
the other covenant proposals, nearly all of which elicit a good deal of
headscratching (at least from me) and seem to require not only larger
on-chain footprints but sizable code changes.
> I am sensitive to technical debt and soft fork processes
If OP_CTV ends up being the most practical approach for vaulting - among
other things - in terms of weight (which it seems to be at the moment) I
don't think "technical debt" is an applicable term.
On Thu, Jan 27, 2022 at 5:20 PM Russell O'Connor via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> I am sensitive to technical debt and soft fork processes, and I don't
> believe I'm unusually particular about these issues. Once implemented,
> opcodes must be supported and maintained indefinitely. Some opcodes are
> easier to maintain than others. These particular opcodes involve caching
> of hash computations and, for that reason, I would judge them to be of
> moderate complexity.
>
> But more importantly, soft-forks are inherently a risky process, so we
> should be getting as much value out of them as we reasonably can. I don't
> think implementing a CTV opcode that we expect to largely be obsoleted by a
> TXHASH at a later date is yielding good value from a soft fork process.
>
> The strongest argument I can make in favour of CTV would be something
> like: "We definitely want bare CTV and if we are going to add CTV to legacy
> script (since we cannot use TXHASH in legacy script), then it is actually
> easier not to exclude it from tapscript, even if we plan to add TXHASH to
> tapscript as well."
>
> But that argument basically rests the entire value of CTV on the shoulders
> of bare CTV. As I understand, the argument for why we want bare CTV,
> instead of just letting people use tapscript, involves the finer details of
> weight calculations, and I haven't really reviewed that aspect yet. I
> think it would need to be pretty compelling to make it worthwhile to add
> CTV for that one use case.
>
>
> Regarding "OP_TXHASH+CSFSV doesn't seem to be the 'full' set of things
> needed", I totally agree we will want more things such as CAT, rolling
> SHA256 opcodes, wider arithmetic, pushing amounts onto the stack, some kind
> of tapleaf manipulation and/or TWEAKVERIFY. For now, I only want to argue
> TXHASH+CSFSV is better than CTV+APO because it gives us more value, namely
> oracle signature verification. In particular, I want to argue that
> TXHASH's push semantics is better than CTV's verify semantics because it
> composes better by not needing to carry an extra 32-bytes (per instance) in
> the witness data. I expect that in a world of full recursive covenants,
> TXHASH would still be useful as a fast and cheap way to verify the
> "payload" of these covenants, i.e. that a transaction is paying a certain,
> possibly large, set of addresses certain specific amounts of money. And
> even if not, TXHASH+CSFSV would still be the way that eltoo would be
> implemented under this proposal.
>
> On Wed, Jan 26, 2022 at 5:16 PM Jeremy <jlrubin@mit.edu> wrote:
>
>> Hi Russell,
>>
>> Thanks for this email, it's great to see this approach described.
>>
>> A few preliminary notes of feedback:
>>
>> 1) a Verify approach can be made to work for OP_TXHASH (even with CTV
>> as-is) E.g., suppose a semantic added for a single byte stack[-1] sighash
>> flag to read the hash at stack[-2], then the hash can be passed in instead
>> of put on the stack. This has the disadvantage of larger witnesses, but the
>> advantage of allowing undefined sighash flags to pass for any hash type.
>> 2) using the internal key for APO covenants is not an option because it
>> makes transaction construction interactive and precludes contracts with a
>> NUMS point taproot key. Instead, if you want similar savings, you should
>> advocate an OP_GENERATOR which puts G on the stack. Further, an untagged
>> APO variant which has split R and S values would permit something like
>> <sig> OP_GENERATOR OP_GENERATOR CHECKSIGAPO, which would be only 2 more
>> bytes than CTV.
>> 3) I count something like 20 different flags in your proposal. As long as
>> flags are under 40 bytes (and 32 assuming we want it to be easy) without
>> upgrading math this should be feasible to manipulate on the stack
>> programmatically. This is ignoring some of the more flexible additions you
>> mention about picking which outputs/inputs are included. However, 20 flags
>> means that for testing we would want comprehensive tests and understanding
>> for ~1 million different flag combos and the behaviors they expose. I think
>> this necessitates a formal model of scripting and transaction validity
>> properties. Are there any combinations that might be undesirable?
>> 4) Just hashing or not hashing isn't actually that flexible, because it
>> doesn't natively let you do things like (for example) TLUV. You really do
>> need tx operations for directly manipulating the data on the stack to
>> construct the hash if you want more flexible covenants. This happens to be
>> compatible with either a Verify or Push approach, since you either
>> destructure a pushed hash or build up a hash for a verify.
>> 5) Flexible hashing has the potential for quadratic hashing bugs. The
>> fields you propose seem to be within similar range to work you could cause
>> with a regular OP_HASH256, although you'd want to be careful with some of
>> the proposed extensions that you don't create risk of quadratic hashing,
>> which seems possible with an output selecting opcode unless you cache
>> properly (which might be tricky to do). Overall, the fields explicitly
>> mentioned seem safe; the "possibles" seem to have some more complex
>> interactions. E.g., CTV with the ability to pick a subset of outputs would
>> be exposed to quadratic hashing.
>> 6) Missing field: covering the annex or some sub-range of the annex
>> (quadratic hashing issues on the latter)
>> 7) It seems simpler to, for many of these fields, push values directly
>> (as in OP_PUSHTXDATA from Johnson Lau) because the combo of flags to push
>> the hash of a single output's amount to emulate OP_AMOUNT looks 'general
>> but annoying'. It may make more sense to do the OP_PUSHTXDATA style opcode
>> instead. This also makes it simpler to think about the combinations of
>> flags, since it's really N independent multi-byte opcodes.
>>
>>
>> Ultimately if we had OP_TXHASH available "tomorrow", I would be able to
>> build out the use cases I care about for CTV (and more). So I don't have an
>> opposition on it with regards to lack of function.
>>
>> However, if one finds the TXHASH approach acceptable, then you should
>> also be relatively fine doing APO, CTV, CSFS, TXHASH in any
>> order (whenever "ready"), unless you are particularly sensitive to
>> "technical debt" and "soft fork processes". The only costs of doing
>> something for CTV or APO given an eventual TXHASH are perhaps a wasted key
>> version or the 32 byte argument of a NOP opcode and some code to maintain.
>>
>> Are there other costs I am missing?
>>
>> However, as it pertains to actual rollout:
>>
>> - OP_TXHASH+CSFSV doesn't seem to be the "full" set of things needed (we
>> still need e.g. OP_CAT, Upgraded >=64 bit Math, TLUV or OP_TWEAK
>> OP_TAPBRANCH OP_MANIPULATETAPTREE, and more) to fully realize the
>> covenanting power it intends to introduce.
>> - What sort of timeline would it take to ready something like TXHASH (and
>> desired friends) given greater scope of testing and analysis (standalone +
>> compared to CTV)?
>> - Is there opposition from the community to this degree of
>> general/recursive covenants?
>> - Does it make "more sense" to invest the research and development effort
>> that would go into proving TXHASH safe, for example, into Simplicity
>> instead?
>>
>> Overall, *my opinion* is that:
>>
>> - TXHASH is an acceptable theoretical approach, and I am happy to put
>> more thought into it and maybe draft a prototype of it.
>> - I prefer CTV as a first step for pragmatic engineering and availability
>> timeline reasons.
>> - If TXHASH were to take, optimistically, 2 years to develop and review,
>> and then 1 year to activate, the "path dependence of software" would put
>> Bitcoin in a much better place were we to have CTV within 1 year and
>> applications (that are to be a subset of TXHASH later) being built over the
>> next few years enhanced in the future by TXHASH's availability.
>> - There is an element of expediency merited for something like CTV
>> insofar as it provides primitives to tackle time sensitive issues around
>> privacy, scalability, self custody, and decentralization. The
>> aforementioned properties may be difficult to reclaim once given away (with
>> the exception of perhaps scalability).
>> - Bringing CTV to an implemented state of near-unanimous "we could do
>> this, technically" is good for concretely driving the process of review for
>> any covenant proposals forward, irrespective of if we ultimately activate.
>> (I.e., if there were a reason we could not do CTV safely, it would likely
>> have implications for any other future covenant)
>>
>> Concretely, I'm not going to stop advocating for CTV based on the above,
>> but I'm very happy to have something new in the mix to consider!
>>
>> Best,
>>
>> Jeremy
>>
>>
>> --
>> @JeremyRubin <https://twitter.com/JeremyRubin>
>>
>>
>> On Wed, Jan 26, 2022 at 9:23 AM Russell O'Connor via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> Recapping the relationship between CTV and ANYPREVOUT::
>>>
>>> It is known that there is a significant amount of overlap in the
>>> applications that are enabled by the CTV and ANYPREVOUT proposals despite
>>> the fact that their primary applications (congestion control for CTV and
>>> eltoo lightning channels for ANYPREVOUT) are quite distinct.
>>> In particular, ANYPREVOUT can enable most of the applications of CTV,
>>> albeit with a higher cost. The primary functionality of CTV is to allow a
>>> scriptPubKey to make a commitment to its spending transaction's hash with
>>> the input's TXID excluded from the hash. This exclusion is necessary
>>> because the scriptPubKey is hashed into the input's TXID, and including the
>>> TXID would cause a cycle of hash commitments, which is impossible to
>>> construct. On the other hand, ANYPREVOUT defines a signature hash mode
>>> that similarly excludes the input's TXID for its purpose of rebindable
>>> signatures.
>>>
>>> This means that ANYPREVOUT can mimic most of the properties of CTV by
>>> committing both a public key along with an ANYPREVOUT signature inside
>>> scriptPubKey. In fact, the only reason Bitcoin doesn't have covenants
>>> today is due to this cycle between scriptPubKeys and the TXIDs that occur
>>> in all the sighash modes.
>>>
>>> The first major difference between simulating CTV via ANYPREVOUT and
>>> the actual CTV proposal is: (1) The cost of simulating CTV. With CTV the
>>> spending transaction is committed using a hash of 32 bytes, while
>>> simulating it with ANYPREVOUT requires 64 bytes for a signature, and 32
>>> bytes for some public key, plus a few more bytes for various flags. Some
>>> of that cost could be reduced by using the inner public key (1 byte
>>> representation) and, if we had CAT, maybe by assembling the signature from
>>> reusable pieces (i.e. setting the nonce of the committed signature equal to
>>> the public key).
>>>
>>> The other major difference is: (2) CTV's transaction hash covers values
>>> such as the number of inputs in the transaction and their sequence numbers,
>>> which ANYPREVOUT does not cover. CTV's hash contains enough information so
>>> that when combined with the missing TXIDs, you can compute the TXID of the
>>> spending transaction. In particular if the number of inputs is committed
>>> to being 1, once the scriptpubkey's transaction id is known and committed
>>> to the blockchain, the TXID of its spending transaction is deducible. And
>>> if that transaction has outputs that have CTV commitments in them, you can
>>> deduce their spending TXIDs in turn. While this is a pretty neat feature,
>>> something that ANYPREVOUT cannot mimic, the main application for it is
>>> listed as using congestion control to fund lightning channels, fixing their
>>> TXIDs in advance of them being placed on chain. However, if ANYPREVOUT
>>> were used to mimic CTV, then likely it would be eltoo channels that would
>>> be funded, and it isn't necessary to know the TXIDs of eltoo channels in
>>> advance in order to use them.
>>>
>>>
>>>
>>> An Alternative Proposal::
>>>
>>> Given the overlap in functionality between CTV and ANYPREVOUT, I think
>>> it makes sense to decompose their operations into their constituent pieces
>>> and reassemble their behaviour programmatically. To this end, I'd like to
>>> instead propose OP_TXHASH and OP_CHECKSIGFROMSTACKVERIFY.
>>>
>>> OP_TXHASH would pop a txhash flag from the stack and compute a (tagged)
>>> txhash in accordance with that flag, and push the resulting hash onto the
>>> stack.
>>> OP_CHECKSIGFROMSTACKVERIFY would pop a pubkey, message, and signature
>>> from the stack and fail if the signature does not verify on that message.
>>>
>>> CTV and TXHASH have roughly equivalent functionality. 'CTV DROP' can be
>>> simulated by '<ctv_style_flag> TXHASH EQUALVERIFY'. The reverse is also
>>> true where '<ctv_style_flag> TXHASH' can be simulated by
>>> '<ctv-result-from-witness-stack> CTV'; however, as you can see, simulating
>>> TXHASH from CTV is much more expensive than the other way around, because
>>> the resulting 32-byte hash result must be included as part of the witness
>>> stack.
>>>
>>> '<anyprevout-pubkey> CHECKSIGVERIFY' can be simulated by
>>> '<apo_style_flag> TXHASH <pubkey> CHECKSIGFROMSTACKVERIFY'. Here we see
>>> the advantage of pushing the hash value onto the stack. APO can be
>>> simulated without needing to include a copy of the resulting txhash inside
>>> the witness data.
>>>
>>> In addition to the CTV and ANYPREVOUT applications, with
>>> CHECKSIGFROMSTACKVERIFY we can verify signatures on arbitrary messages
>>> signed by oracles for oracle applications. This is where we see the
>>> benefit of decomposing operations into primitive pieces. By giving users
>>> the ability to program their own use cases from components, we get more
>>> applications out of fewer op codes!
>>>
>>>
>>>
>>> Caveats::
>>>
>>> First, I acknowledge that replicating the behaviour of CTV and
>>> ANYPREVOUT does cost a few more bytes than using the custom purpose built
>>> proposals themselves. That is the price to be paid when we choose the
>>> ability to program solutions from pieces. But we get to reap the
>>> advantages of being able to build more applications from these pieces.
>>>
>>> Unlike CTV, TXHASH is not NOP-compatible and can only be implemented
>>> within tapscript. In particular, bare CTV isn't possible with this
>>> proposal. However, this proposal doesn't preclude the possibility of
>>> having CTV added to legacy script while having TXHASH added to tapscript.
>>>
>>> For similar reasons, TXHASH is not amenable to extending the set of
>>> txflags at a later date. In theory, one could have TXHASH
>>> abort-with-success when encountering an unknown set of flags. However,
>>> this would make analyzing tapscript much more difficult. Tapscripts would
>>> then be able to abort with success or failure depending on the order script
>>> fragments are assembled and executed, and getting the order incorrect would
>>> be catastrophic. This behavior is manifestly different from the current
>>> batch of OP_SUCCESS opcodes that abort-with-success just by their mere
>>> presence, whether they would be executed or not.
>>>
>>> I believe the difficulties with upgrading TXHASH can be mitigated by
>>> designing a robust set of TXHASH flags from the start. For example having
>>> bits to control whether (1) the version is covered; (2) the locktime is
>>> covered; (3) txids are covered; (4) sequence numbers are covered; (5) input
>>> amounts are covered; (6) input scriptpubkeys are covered; (7) number of
>>> inputs is covered; (8) output amounts are covered; (9) output scriptpubkeys
>>> are covered; (10) number of outputs is covered; (11) the tapbranch is
>>> covered; (12) the tapleaf is covered; (13) the opseparator value is
>>> covered; (14) whether all, one, or no inputs are covered; (15) whether all,
>>> one or no outputs are covered; (16) whether the one input position is
>>> covered; (17) whether the one output position is covered; (18) whether the
>>> sighash flags are covered or not (note: whether the sighash flags
>>> are covered must itself be covered). One could possibly also specify
>>> which input or output position is covered in the single case, and whether
>>> the position is relative to the input's position or is an absolute position.
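As an illustration of such a flag set, the controls enumerated above could be laid out as a simple bitfield; the bit positions, names, and the CTV-style combination shown here are all hypothetical, not part of any proposal:

```python
# Hypothetical bit assignments matching the (1)..(13) list above;
# numbering is purely illustrative.
TXFH_VERSION      = 1 << 0   # (1) tx version covered
TXFH_LOCKTIME     = 1 << 1   # (2) locktime covered
TXFH_TXIDS        = 1 << 2   # (3) input txids covered
TXFH_SEQUENCES    = 1 << 3   # (4) sequence numbers covered
TXFH_IN_AMOUNTS   = 1 << 4   # (5) input amounts covered
TXFH_IN_SPKS      = 1 << 5   # (6) input scriptpubkeys covered
TXFH_NUM_INPUTS   = 1 << 6   # (7) number of inputs covered
TXFH_OUT_AMOUNTS  = 1 << 7   # (8) output amounts covered
TXFH_OUT_SPKS     = 1 << 8   # (9) output scriptpubkeys covered
TXFH_NUM_OUTPUTS  = 1 << 9   # (10) number of outputs covered
TXFH_TAPBRANCH    = 1 << 10  # (11) tapbranch covered
TXFH_TAPLEAF      = 1 << 11  # (12) tapleaf covered
TXFH_OPSEP        = 1 << 12  # (13) opseparator value covered

# A CTV-style flag would set most of these while leaving txids uncovered:
CTV_STYLE = (TXFH_VERSION | TXFH_LOCKTIME | TXFH_SEQUENCES | TXFH_NUM_INPUTS
             | TXFH_OUT_AMOUNTS | TXFH_OUT_SPKS | TXFH_NUM_OUTPUTS)
assert not CTV_STYLE & TXFH_TXIDS  # excluding txids avoids the hash cycle
```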
>>>
>>> That all said, even if other txhash flag modes are needed in the future,
>>> adding TXHASH2 always remains an option.
>>>
>>>
>>>
>>> Interactions with potential future opcodes::
>>>
>>> We should give some consideration to how these opcodes may interact with
>>> future opcodes such as CAT and rolling SHA256 opcodes, and how they might
>>> interface with other covenant opcodes that do things like directly push
>>> input or output amounts onto the stack for computation purposes (opcodes
>>> which have been added to the Elements project).
>>>
>>> With CAT and/or rolling SHA256 opcodes and/or existing SHA256 opcodes,
>>> the CHECKSIGFROMSTACKVERIFY could verify signatures on programmatically
>>> assembled messages. Also, in combination with multiple calls to TXHASH,
>>> it could be used to create signatures that commit to complex subsets of
>>> transaction data.
>>>
>>> If new opcodes are added to push parts of the transaction data directly
>>> onto the stack, e.g. OP_INSPECTOUTPUTVALUE, there is perhaps concern that
>>> they would obsolete TXHASH, since, in the presence of rolling SHA256
>>> opcodes, TXHASH could be simulated. However, given that TXHASH can
>>> compactly create a hash of large portions of transaction data, it seems
>>> unlikely that TXHASH would fall into disuse. Also, a combination of TXHASH
>>> and transaction introspection opcodes can be used to build "*subtractive
>>> covenants*".
>>>
>>> The usual way of building a covenant, which we will call "*additive
>>> covenants*", is to push all the parts of the transaction data you
>>> would like to fix onto the stack, hash it all together, and verify the
>>> resulting hash matches a fixed value. Another way of building covenants,
>>> which we will call "*subtractive covenants*", is to push all the parts
>>> of the transaction data you would like to remain free onto the stack. Then
>>> use rolling SHA256 opcodes starting from a fixed midstate that commits to a
>>> prefix of the transaction hash data. The free parts are hashed into that
>>> midstate. Finally, the resulting hash value is verified to match a value
>>> returned by TXHASH. The ability to nicely build subtractive covenants
>>> depends on the details of how the TXHASH hash value is constructed,
>>> something that I'm told CTV has given consideration to.
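As a rough sketch of the subtractive construction quoted above, Python's hashlib `.copy()` can stand in for a rolling-SHA256 midstate; the field layout and byte strings here are invented for illustration, and a real TXHASH digest would be tagged and structured differently:

```python
import hashlib

# The covenant script commits to a hash midstate over a fixed prefix of
# the transaction data (layout is hypothetical).
fixed_prefix = b"version|locktime|input-section|"
midstate = hashlib.sha256(fixed_prefix)   # the committed "midstate"

def subtractive_covenant_holds(free_parts: bytes, txhash: bytes) -> bool:
    h = midstate.copy()        # resume hashing from the committed midstate
    h.update(free_parts)       # hash in the parts the covenant leaves free
    return h.digest() == txhash

# The spender supplies the free parts; TXHASH supplies the full digest.
free = b"output-section"
full_digest = hashlib.sha256(fixed_prefix + free).digest()
assert subtractive_covenant_holds(free, full_digest)
```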
>>> _______________________________________________
>>> bitcoin-dev mailing list
>>> bitcoin-dev@lists.linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>
[-- Attachment #2: Type: text/html, Size: 27051 bytes --]
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-01-26 17:20 [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT Russell O'Connor
2022-01-26 22:16 ` Jeremy
@ 2022-01-28 1:34 ` Anthony Towns
2022-01-28 13:56 ` Russell O'Connor
2022-02-08 3:40 ` Rusty Russell
2 siblings, 1 reply; 58+ messages in thread
From: Anthony Towns @ 2022-01-28 1:34 UTC (permalink / raw)
To: Russell O'Connor, Bitcoin Protocol Discussion
On Wed, Jan 26, 2022 at 12:20:10PM -0500, Russell O'Connor via bitcoin-dev wrote:
> Recapping the relationship between CTV and ANYPREVOUT::
> While this is a pretty neat feature,
> something that ANYPREVOUT cannot mimic, the main application for it is
> listed as using congestion control to fund lightning channels, fixing their
> TXIDs in advance of them being placed on chain. However, if ANYPREVOUT
> were used to mimic CTV, then likely it would be eltoo channels that would
> be funded, and it isn't necessary to know the TXIDs of eltoo channels in
> advance in order to use them.
Even if they weren't eltoo channels, they could be updated lightning penalty
channels signed with APO signatures so that the txid wasn't crucial. So
I don't think this would require all the work to update to eltoo just to
have this feature, if APO were available without CTV per se.
> An Alternative Proposal::
> ...
> For similar reasons, TXHASH is not amenable to extending the set of txflags
> at a later date.
> I believe the difficulties with upgrading TXHASH can be mitigated by
> designing a robust set of TXHASH flags from the start. For example having
> bits to control whether [...]
I don't think that's really feasible -- eg, what you propose doesn't cover
SIGHASH_GROUP:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019243.html
> That all said, even if other txhash flag modes are needed in the future,
> adding TXHASH2 always remains an option.
I think baking this in from day 0 might be better: make TXHASH be
a multibyte opcode, so that when you decode "0xBB" on the stack,
you also decode a serialize.h:VarInt as the version number. Version 0
(0xBB00) gives hashes corresponding to bip342, version 1 (0xBB01) gives
hashes corresponding to bip118 (anyprevout), anything else remains as
OP_SUCCESS behaviour, and you retain a pretty compact encoding even if
we somehow eventually end up needing hundreds or thousands of different
TXHASH versions.
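A sketch of that decoding, assuming Bitcoin Core's serialize.h VarInt format (7-bit groups, high-bit continuation, with a +1 offset per continuation byte); the 0xBB opcode byte is of course hypothetical:

```python
def read_varint(data: bytes, pos: int):
    """Decode Bitcoin Core's serialize.h VarInt starting at pos."""
    n = 0
    while True:
        ch = data[pos]
        pos += 1
        n = (n << 7) | (ch & 0x7F)
        if ch & 0x80:
            n += 1          # the +1 offset makes every encoding unique
        else:
            return n, pos

def decode_txhash(script: bytes, pos: int):
    """Decode a hypothetical multibyte TXHASH opcode: 0xBB + VarInt version."""
    assert script[pos] == 0xBB
    version, pos = read_varint(script, pos + 1)
    return version, pos

# 0xBB00 -> version 0 (bip342-style hashes), 0xBB01 -> version 1 (bip118)
assert decode_txhash(bytes([0xBB, 0x00]), 0) == (0, 2)
assert decode_txhash(bytes([0xBB, 0x01]), 0) == (1, 2)
```

Anything beyond the defined versions would simply retain OP_SUCCESS behaviour, so the encoding stays compact even with many future versions.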
Because the version here is part of the opcode rather than pulled from
the stack, I think this preserves any benefits related to composition
or analysis, but is otherwise still pretty general. I'm imagining that
the idea would be to be consistent between CHECKSIG key versions and
TXHASH versions.
So I think just designing it this way means TXHASH *would* be "amenable
to extending the set of txflags at a later date."
> '<anyprevout-pubkey> CHECKSIGVERIFY' can be simulated by '<apo_style_flag> TXHASH <pubkey> CHECKSIGFROMSTACKVERIFY'.
I don't think that's quite right. BIP 118 anyprevout is done by taking
the pubkey "P", marking it as "APO-capable" (by prefixing it with 0x01),
and then getting a sighash and sig from the witness. Doing the same
with TXHASH/CSFSV would just be replacing "<APO:P> CHECKSIGVERIFY" with
"TXHASH <P> CSFSV" with the witness providing both the signature and
txhash flag, just as separate elements rather than concatenated. (The
"APO-capable" part is implicit in the "TXHASH" opcode)
> In addition to the CTV and ANYPREVOUT applications, with
> CHECKSIGFROMSTACKVERIFY we can verify signatures on arbitrary messages
> signed by oracles for oracle applications. This is where we see the
> benefit of decomposing operations into primitive pieces. By giving users
> the ability to program their own use cases from components, we get more
> applications out of fewer op codes!
While I see the appeal of this from a language design perspective;
I'm not sure it's really the goal we want. When I look at bitcoin's
existing script, I see a lot of basic opcodes to do simple arithmetic and
manipulate the stack in various ways, but the opcodes that are actually
useful are more "do everything at once" things like check(multi)sig or
sha256. It seems like what's most useful on the blockchain is a higher
level language, rather than more of blockchain assembly language made
up of small generic pieces. I guess "program their own use cases from
components" seems to be coming pretty close to "write your own crypto
algorithms" here...
I'm not really sure what the dividing line there is, or even which side
TXHASH would be on. I'm not even totally convinced that the "high level
language" should be describing what consensus provides rather than some
layer on top that people compile (a la miniscript). Just trying to put
into words why I'm not 100% comfortable with the principle per se.
One thing I've thought about is an opcode like "POP_SIGDATA" which would
populate a new "register" called "sigdata", which would then be added
to the message being signed. That's a generalisation of tapscript's
behaviour for "codeseparator" essentially. That is,
x POP_SIGDATA p CHECKSIG
would be roughly the same as
TXHASH x CAT SHA256SUM p CHECKSIGFROMSTACK
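A rough model of why those two fragments compute the same signed message, with the sigdata register modelled as simple concatenation; all byte strings and the digest layout here are invented for illustration:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

txhash = sha256(b"illustrative tx commitment")  # stand-in for TXHASH output
x = b"extra signed data"

# "x POP_SIGDATA p CHECKSIG": x is moved into the sigdata register, which
# the CHECKSIG message then covers (modelled here as concatenation).
msg_checksig = sha256(txhash + x)

# "TXHASH x CAT SHA256SUM p CHECKSIGFROMSTACK": the same digest is built
# explicitly on the stack and handed to a bare signature check.
stack = []
stack.append(txhash)                      # TXHASH
stack.append(x)                           # push x
top, below = stack.pop(), stack.pop()
stack.append(below + top)                 # CAT
stack.append(sha256(stack.pop()))         # SHA256SUM
msg_csfs = stack.pop()

assert msg_checksig == msg_csfs
```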
I think "POP_SIGDATA" makes for an interesting counterpart to
"PUSH_ANNEXITEM" -- we implicitly commit to all the annex items in
signatures, so PUSH_ANNEXITEM would give a way to use signed data that's
given verbatim in the witness in further calculations; but POP_SIGDATA
would do the opposite, allowing you to require data that's the result
of calculations and not explicitly spelled out in the witness be signed.
You could implement CHECKSIGFROMSTACK using that, ie:
sig x p CHECKSIGFROMSTACK
is the same as:
sig' x POP_SIGDATA p CHECKSIG
provided sig' applies a new "SIGHASH_NO_TX_DATA_AT_ALL" sighash flag to
"sig" that just does what it says.
You could likewise implement CTV as an extension to CHECKSIG -- define a
new pubkey type that's just the constant "0x0000" and have the "signature"
be valid if it's an exact match for the corresponding message hash. You
could bump the key to "0x0001" to introduce new hashes; and include a
"sighash" with the "signature" as well perhaps. (Apart from reusing an
existing opcode instead of introducing a new one, and costing some
additional witness bytes, I don't think that makes much difference
either way)
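A sketch of that constant-key idea, with the dispatch made explicit; the key bytes and hash are illustrative, and real verification of other key types is elided:

```python
import hashlib

CTV_KEY_V0 = b"\x00\x00"  # hypothetical constant "pubkey" marking CTV mode

def checksig_with_ctv_keytype(pubkey: bytes, sig: bytes, msg_hash: bytes) -> bool:
    # Key type 0x0000: the "signature" is valid iff it is an exact match
    # for the corresponding message hash; no curve operations involved.
    if pubkey == CTV_KEY_V0:
        return sig == msg_hash
    # Other key bytes would fall through to real signature verification
    # (omitted); bumping to 0x0001 etc. could introduce new hash versions.
    raise NotImplementedError

h = hashlib.sha256(b"spending tx template").digest()
assert checksig_with_ctv_keytype(CTV_KEY_V0, h, h)
```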
I think the key tradeoff between "x POP_SIGDATA p CHECKSIG" and
"CHECKSIGFROMSTACK" isn't so much that one approach is a couple of bytes
more or less or one claims two opcodes vs just one for the other, but
whether it's common to want to commit to some extra random data alongside
the tx itself, and in the cases where that's desirable, if we can have
a standard way of constructing that and assume everyone will use it; or
if it's important that wallets can design their own way of committing to
the extra data more manually, because it's important to support different
approaches in different circumstances.
If we had CTV, POP_SIGDATA, and SIGHASH_NO_TX_DATA_AT_ALL but no OP_CAT,
are there any practical use cases that wouldn't be covered that having
TXHASH/CAT/CHECKSIGFROMSTACK instead would allow? Or where those would
be significantly more convenient/efficient?
(Assume "y x POP_SIGDATA POP_SIGDATA p CHECKSIGVERIFY q CHECKSIG"
commits to a vector [x,y] via p but does not commit to either via q so
that there's some "CAT"-like behaviour available)
I think a difference between "TXHASH EQUALVERIFY" and "CTV" is that
because the idea for TXHASH is to be compatible with CHECKSIGFROMSTACK,
then the messages it hashes should be distinct from anything else you
might ever sign. But for CTV that doesn't matter, because there's no
signature to be reused; so as a result, how the message is hashed can
be simpler, and that in turn may make it easier to do the "subtractive
covenants" and similar.
I guess I don't find that super important -- if you're manually
constructing covenants in script by putting together various bits of
data about a tx, then I guess I think you've already lost the game, and
having to have your script be a little more complicated in order to do
tagged hashes and the like is no big deal.
Concretely:
- I think TXHASH needs to be designed to be upgradable; but I think
that's solvable
- I think it makes sense for TXHASH and CHECKSIG to be synchronised;
so any message digest you can hash via txhash should be signable via
CHECKSIG and vice-versa. Given that, I don't think this approach
replaces APO, just adds to it.
- I think I'd prefer having a single set of message digests shared
between TXHASH and CHECKSIG, than having one set of message digests
for CHECKSIG and a different set for CTV. But that's a design choice
for CTV rather than an advantage of TXHASH over CTV.
- I think defining some OP_NOPx in terms of TXHASH so that it can
be made available without p2sh/segwit/tapscript wrapping would work
fine, if that optimisation is worthwhile
- Even if we favoured CTV over TXHASH for consensus implementation,
I think "TXHASH" seems like a good primitive to use when talking
about script language design...
Cheers,
aj
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-01-28 0:18 ` James O'Beirne
@ 2022-01-28 13:14 ` Michael Folkson
2022-01-28 14:17 ` Anthony Towns
2022-01-28 14:13 ` Russell O'Connor
2022-01-31 2:18 ` Anthony Towns
2 siblings, 1 reply; 58+ messages in thread
From: Michael Folkson @ 2022-01-28 13:14 UTC (permalink / raw)
To: James O'Beirne, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 24056 bytes --]
> Even if we were to adopt something like TXHASH, how long is it going to take to develop, test, and release? My guess is "a while" - in the meantime, users of Bitcoin are without a vault strategy that doesn't require either presigning transactions with ephemeral keys (operationally difficult) or multisig configurations that would make Rube Goldberg blush (operationally difficult and precarious).
To me this seems to be jumping ahead a number of steps from where we are at the current time. If the ecosystem was widely using all the tools available to them at the current time (MuSig(2), Taproot trees to embed complex scripts, Miniscript etc), was testing out upcoming available tools like threshold key aggregation schemes (e.g. FROST) on signets and the final missing piece was a covenant opcode to avoid the deleted key requirement then the argument for urgency would be stronger. I would still share the concerns I and many others have repeated over rushing soft forks and treating mainnet as a testbed for new use cases rather than the final destination for changes that will stand the test of time. But I would be a lot more sympathetic to that argument.
This isn't a criticism of the ecosystem or individual vault projects like Revault, it is clearly still very early. darosior (Revault) is working on getting a first version of Miniscript finalized and in Core [0] and I'm assuming will be part of the effort to get Taproot support in Miniscript assuming that initial effort succeeds. Murch is tracking basic send and receive to the P2TR addresses (not complex scripts, multisig, MuSig(2), merely single key spends) in the ecosystem [1] and there is still a long way to go there.
There are a bunch of covenant opcodes that have been enabled on Liquid [2] that I haven't heard yet of anyone building vault prototypes with. It would be good to get others (TLUV, TXHASH) in future. There is not even a custom signet with CTV (as far as I know) for those who subscribe to the view that we must rush to get CTV activated on mainnet as soon as possible with no thought to what opcodes might follow.
When this discussion focuses on the pros and cons of various proposals and how they are being tested and used in prototypes on signets, sidechains I think it is really productive. But when it gets onto urgency (or worse activation speculation) I am just perplexed. That viewpoint seems to completely ignore where we are currently with Taproot use and tooling (on which most vault designs will presumably build) and even more perplexingly where we are with vault prototypes on signets, sidechains.
I am sure at some point in the future we will have various vault prototypes on signets, sidechains making use of Taproot, Miniscript, MuSig(2), FROST etc and crying out for a covenant opcode or sighash flag to go into production on mainnet. But we seem miles away from that at the present time.
[0]: https://github.com/bitcoin/bitcoin/pull/24147
[1]: https://en.bitcoin.it/wiki/Bech32_adoption
[2]: https://github.com/ElementsProject/elements/blob/master/doc/tapscript_opcodes.md
--
Michael Folkson
Email: michaelfolkson at protonmail.com
Keybase: michaelfolkson
PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Friday, January 28th, 2022 at 12:18 AM, James O'Beirne via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>> I don't think implementing a CTV opcode that we expect to largely be obsoleted by a TXHASH at a later date is yielding good value from a soft fork process.
>
> This presumes the eventual adoption of TXHASH (or something like it). You're presenting a novel idea that, as far as I know, hasn't had much time to bake in public. Like Jeremy, I'm concerned by the combinatorial growth of flags and the implications that has for testing. Caching for something like TXHASH looks to me like a whole different ballgame relative to CTV, which has a single kind of hash.
>
> Even if we were to adopt something like TXHASH, how long is it going to take to develop, test, and release? My guess is "a while" - in the meantime, users of Bitcoin are without a vault strategy that doesn't require either presigning transactions with ephemeral keys (operationally difficult) or multisig configurations that would make Rube Goldberg blush (operationally difficult and precarious). The utility of vaulting seems underappreciated among consensus devs and it's something I'd like to write about soon in a separate post.
>
>> The strongest argument I can make in favour of CTV would be something like: "We definitely want bare CTV and if we are going to add CTV to legacy script (since we cannot use TXHASH in legacy script), then it is actually easier not to exclude it from tapscript, even if we plan to add TXHASH to tapscript as well."
>
> Another argument for CTV (which I find especially persuasive) is its simplicity - it's relatively easy to reason about and, at this point, pretty well understood. It seems like a low-risk change relative to some of the other covenant proposals, nearly all of which elicit a good deal of headscratching (at least from me) and seem to require not only larger on-chain footprints but sizable code changes.
>
>> I am sensitive to technical debt and soft fork processes
>
> If OP_CTV ends up being the most practical approach for vaulting - among other things - in terms of weight (which it seems to be at the moment) I don't think "technical debt" is an applicable term.
>
> On Thu, Jan 27, 2022 at 5:20 PM Russell O'Connor via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> I am sensitive to technical debt and soft fork processes, and I don't believe I'm unusually particular about these issues. Once implemented, opcodes must be supported and maintained indefinitely. Some opcodes are easier to maintain than others. These particular opcodes involve caching of hash computations and, for that reason, I would judge them to be of moderate complexity.
>>
>> But more importantly, soft-forks are inherently a risky process, so we should be getting as much value out of them as we reasonably can. I don't think implementing a CTV opcode that we expect to largely be obsoleted by a TXHASH at a later date is yielding good value from a soft fork process.
>>
>> The strongest argument I can make in favour of CTV would be something like: "We definitely want bare CTV and if we are going to add CTV to legacy script (since we cannot use TXHASH in legacy script), then it is actually easier not to exclude it from tapscript, even if we plan to add TXHASH to tapscript as well."
>>
>> But that argument basically rests the entire value of CTV on the shoulders of bare CTV. As I understand, the argument for why we want bare CTV, instead of just letting people use tapscript, involves the finer details of weight calculations, and I haven't really reviewed that aspect yet. I think it would need to be pretty compelling to make it worthwhile to add CTV for that one use case.
>>
>> Regarding "OP_TXHASH+CSFSV doesn't seem to be the 'full' set of things needed", I totally agree we will want more things such as CAT, rolling SHA256 opcodes, wider arithmetic, pushing amounts onto the stack, some kind of tapleaf manipulation and/or TWEAKVERIFY. For now, I only want to argue TXHASH+CSFSV is better than CTV+APO because it gives us more value, namely oracle signature verification. In particular, I want to argue that TXHASH's push semantics is better than CTV's verify semantics because it composes better by not needing to carry an extra 32 bytes (per instance) in the witness data. I expect that in a world of full recursive covenants, TXHASH would still be useful as a fast and cheap way to verify the "payload" of these covenants, i.e. that a transaction is paying a certain, possibly large, set of addresses certain specific amounts of money. And even if not, TXHASH+CSFSV would still be the way that eltoo would be implemented under this proposal.
>>
>> On Wed, Jan 26, 2022 at 5:16 PM Jeremy <jlrubin@mit.edu> wrote:
>>
>>> Hi Russell,
>>>
>>> Thanks for this email, it's great to see this approach described.
>>>
>>> A few preliminary notes of feedback:
>>>
>>> 1) A Verify approach can be made to work for OP_TXHASH (even with CTV as-is). E.g., suppose a semantic were added where a single-byte sighash flag at stack[-1] reads the hash at stack[-2]; then the hash can be passed in instead of put on the stack. This has the disadvantage of larger witnesses, but the advantage of allowing undefined sighash flags to pass for any hash type.
>>> 2) using the internal key for APO covenants is not an option because it makes transaction construction interactive and precludes contracts with a NUMS point taproot key. Instead, if you want similar savings, you should advocate an OP_GENERATOR which puts G on the stack. Further, an untagged APO variant which has split R and S values would permit something like <sig> OP_GENERATOR OP_GENERATOR CHECKSIGAPO, which would be only 2 more bytes than CTV.
>>> 3) I count something like 20 different flags in your proposal. As long as flags are under 40 bytes (and 32 assuming we want it to be easy) without upgrading math this should be feasible to manipulate on the stack programmatically. This is ignoring some of the more flexible additions you mention about picking which outputs/inputs are included. However, 20 flags means that for testing we would want comprehensive tests and understanding for ~1 million different flag combos and the behaviors they expose. I think this necessitates a formal model of scripting and transaction validity properties. Are there any combinations that might be undesirable?
>>> 4) Just hashing or not hashing isn't actually that flexible, because it doesn't natively let you do things like (for example) TLUV. You really do need tx operations for directly manipulating the data on the stack to construct the hash if you want more flexible covenants. This happens to be compatible with either a Verify or Push approach, since you either destructure a pushed hash or build up a hash for a verify.
>>> 5) Flexible hashing has the potential for quadratic hashing bugs. The fields you propose seem to be within similar range to the work you could cause with a regular OP_HASH256, although you'd want to be careful with some of the proposed extensions that you don't create a risk of quadratic hashing, which seems possible with an output-selecting opcode unless you cache properly (which might be tricky to do). Overall, the fields explicitly mentioned seem safe; the "possibles" seem to have some more complex interactions. E.g., CTV with the ability to pick a subset of outputs would be exposed to quadratic hashing.
>>> 6) Missing field: covering the annex or some sub-range of the annex (quadratic hashing issues on the latter)
>>> 7) It seems simpler to, for many of these fields, push values directly (as in OP_PUSHTXDATA from Johnson Lau) because the combo of flags to push the hash of a single output's amount to emulate OP_AMOUNT looks 'general but annoying'. It may make more sense to do the OP_PUSHTXDATA style opcode instead. This also makes it simpler to think about the combinations of flags, since it's really N independent multi-byte opcodes.
>>>
>>> Ultimately if we had OP_TXHASH available "tomorrow", I would be able to build out the use cases I care about for CTV (and more). So I don't have an opposition on it with regards to lack of function.
>>>
>>> However, if one finds the TXHASH approach acceptable, then you should also be relatively fine doing APO, CTV, CSFS, TXHASH acceptable in any order (whenever "ready"), unless you are particularly sensitive to "technical debt" and "soft fork processes". The only costs of doing something for CTV or APO given an eventual TXHASH is perhaps a wasted key version or the 32 byte argument of a NOP opcode and some code to maintain.
>>>
>>> Are there other costs I am missing?
>>>
>>> However, as it pertains to actual rollout:
>>>
>>> - OP_TXHASH+CSFSV doesn't seem to be the "full" set of things needed (we still need e.g. OP_CAT, Upgraded >=64 bit Math, TLUV or OP_TWEAK OP_TAPBRANCH OP_MANIPULATETAPTREE, and more) to fully realize the covenanting power it intends to introduce.
>>> - What sort of timeline would it take to ready something like TXHASH (and desired friends) given greater scope of testing and analysis (standalone + compared to CTV)?
>>> - Is there opposition from the community to this degree of general/recursive covenants?
>>> - Does it make "more sense" to invest the research and development effort that would go into proving TXHASH safe, for example, into Simplicity instead?
>>>
>>> Overall, my opinion is that:
>>>
>>> - TXHASH is an acceptable theoretical approach, and I am happy to put more thought into it and maybe draft a prototype of it.
>>> - I prefer CTV as a first step for pragmatic engineering and availability timeline reasons.
>>> - If TXHASH were to take, optimistically, 2 years to develop and review, and then 1 year to activate, the "path dependence of software" would put Bitcoin in a much better place were we to have CTV within 1 year and applications (that are to be a subset of TXHASH later) being built over the next few years enhanced in the future by TXHASH's availability.
>>> - There is an element of expediency merited for something like CTV insofar as it provides primitives to tackle time-sensitive issues around privacy, scalability, self custody, and decentralization. The aforementioned properties may be difficult to reclaim once given away (with the exception of perhaps scalability).
>>> - Bringing CTV to an implemented state of near-unanimous "we could do this, technically" is good for concretely driving the process of review for any covenant proposals forward, irrespective of if we ultimately activate. (I.e., if there were a reason we could not do CTV safely, it would likely have implications for any other future covenant)
>>>
>>> Concretely, I'm not going to stop advocating for CTV based on the above, but I'm very happy to have something new in the mix to consider!
>>>
>>> Best,
>>>
>>> Jeremy
>>>
>>> --
>>> [@JeremyRubin](https://twitter.com/JeremyRubin)
>>>
>>> On Wed, Jan 26, 2022 at 9:23 AM Russell O'Connor via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>
>>>> Recapping the relationship between CTV and ANYPREVOUT::
>>>>
>>>> It is known that there is a significant amount of overlap in the applications that are enabled by the CTV and ANYPREVOUT proposals despite the fact that their primary applications (congestion control for CTV and eltoo lightning channels for ANYPREVOUT) are quite distinct.
>>>> In particular, ANYPREVOUT can enable most of the applications of CTV, albeit with a higher cost. The primary functionality of CTV is to allow a scriptPubKey to make a commitment to its spending transaction's hash with the input's TXID excluded from the hash. This exclusion is necessary because the scriptPubKey is hashed into the input's TXID, and including the TXID would cause a cycle of hash commitments, which is impossible to construct. On the other hand, ANYPREVOUT defines a signature hash mode that similarly excludes the input's TXID for its purpose of rebindable signatures.
>>>>
>>>> This means that ANYPREVOUT can mimic most of the properties of CTV by committing both a public key along with an ANYPREVOUT signature inside scriptPubKey. In fact, the only reason Bitcoin doesn't have covenants today is due to this cycle between scriptPubKeys and the TXIDs that occur in all the sighash modes.
>>>>
>>>> The major differences between simulating CTV via ANYPREVOUT and the actual CTV proposal are: (1) The cost of simulating CTV. With CTV the spending transaction is committed using a hash of 32 bytes, while simulating it with ANYPREVOUT requires 64 bytes for a signature, and 32 bytes for some public key, plus a few more bytes for various flags. Some of that cost could be reduced by using the inner public key (1 byte representation) and, if we had CAT, maybe by assembling the signature from reusable pieces (i.e. setting the nonce of the committed signature equal to the public key).
>>>>
>>>> The other major difference is: (2) CTV's transaction hash covers values such as the number of inputs in the transaction and their sequence numbers, which ANYPREVOUT does not cover. CTV's hash contains enough information so that when combined with the missing TXIDs, you can compute the TXID of the spending transaction. In particular if the number of inputs is committed to being 1, once the scriptpubkey's transaction id is known and committed to the blockchain, the TXID of its spending transaction is deducible. And if that transaction has outputs that have CTV commitments in them, you can deduce their spending TXIDs in turn. While this is a pretty neat feature, something that ANYPREVOUT cannot mimic, the main application for it is listed as using congestion control to fund lightning channels, fixing their TXIDs in advance of them being placed on chain. However, if ANYPREVOUT were used to mimic CTV, then likely it would be eltoo channels that would be funded, and it isn't necessary to know the TXIDs of eltoo channels in advance in order to use them.
>>>>
>>>> An Alternative Proposal::
>>>>
>>>> Given the overlap in functionality between CTV and ANYPREVOUT, I think it makes sense to decompose their operations into their constituent pieces and reassemble their behaviour programmatically. To this end, I'd like to instead propose OP_TXHASH and OP_CHECKSIGFROMSTACKVERIFY.
>>>>
>>>> OP_TXHASH would pop a txhash flag from the stack and compute a (tagged) txhash in accordance with that flag, and push the resulting hash onto the stack.
>>>> OP_CHECKSIGFROMSTACKVERIFY would pop a pubkey, message, and signature from the stack and fail if the signature does not verify on that message.
>>>>
>>>> CTV and TXHASH have roughly equivalent functionality. 'CTV DROP' can be simulated by '<ctv_style_flag> TXHASH EQUALVERIFY'. The reverse is also true where '<ctv_style_flag> TXHASH' can be simulated by CTV by '<ctv-result-from-witness-stack> CTV', however, as you can see, simulating TXHASH from CTV is much more expensive than the other way around, because the resulting 32-byte hash result must be included as part of the witness stack.
>>>>
>>>> '<anyprevout-pubkey> CHECKSIGVERIFY' can be simulated by '<apo_style_flag> TXHASH <pubkey> CHECKSIGFROMSTACKVERIFY'. Here we see the advantage of pushing the hash value onto the stack. APO can be simulated without needing to include a copy of the resulting txhash inside the witness data.
>>>>
>>>> In addition to the CTV and ANYPREVOUT applications, with CHECKSIGFROMSTACKVERIFY we can verify signatures on arbitrary messages signed by oracles for oracle applications. This is where we see the benefit of decomposing operations into primitive pieces. By giving users the ability to program their own use cases from components, we get more applications out of fewer op codes!
>>>>
>>>> Caveats::
>>>>
>>>> First, I acknowledge that replicating the behaviour of CTV and ANYPREVOUT does cost a few more bytes than using the custom purpose built proposals themselves. That is the price to be paid when we choose the ability to program solutions from pieces. But we get to reap the advantages of being able to build more applications from these pieces.
>>>>
>>>> Unlike CTV, TXHASH is not NOP-compatible and can only be implemented within tapscript. In particular, bare CTV isn't possible with this proposal. However, this proposal doesn't preclude the possibility of having CTV added to legacy script while having TXHASH added to tapscript.
>>>>
>>>> For similar reasons, TXHASH is not amenable to extending the set of txflags at a later date. In theory, one could have TXHASH abort-with-success when encountering an unknown set of flags. However, this would make analyzing tapscript much more difficult. Tapscripts would then be able to abort with success or failure depending on the order script fragments are assembled and executed, and getting the order incorrect would be catastrophic. This behavior is manifestly different from the current batch of OP_SUCCESS opcodes that abort-with-success just by their mere presence, whether they would be executed or not.
>>>>
>>>> I believe the difficulties with upgrading TXHASH can be mitigated by designing a robust set of TXHASH flags from the start. For example having bits to control whether (1) the version is covered; (2) the locktime is covered; (3) txids are covered; (4) sequence numbers are covered; (5) input amounts are covered; (6) input scriptpubkeys are covered; (7) number of inputs is covered; (8) output amounts are covered; (9) output scriptpubkeys are covered; (10) number of outputs is covered; (11) the tapbranch is covered; (12) the tapleaf is covered; (13) the opseparator value is covered; (14) whether all, one, or no inputs are covered; (15) whether all, one or no outputs are covered; (16) whether the one input position is covered; (17) whether the one output position is covered; (18) whether the sighash flags are covered or not (note: whether or not the sighash flags are or are not covered must itself be covered). Possibly specifying which input or output position is covered in the single case and whether the position is relative to the input's position or is an absolute position.
>>>>
>>>> That all said, even if other txhash flag modes are needed in the future, adding TXHASH2 always remains an option.
>>>>
>>>> Interactions with potential future opcodes::
>>>>
>>>> We should give some consideration on how these opcodes may interact with future opcodes such as CAT, rolling SHA256 opcodes, or how it might interface with other covenant opcodes that may do things like, directly push input or output amounts onto the stack for computation purposes, opcodes which have been added to the Elements project.
>>>>
>>>> With CAT and/or rolling SHA256 opcodes and/or existing SHA256 opcodes, CHECKSIGFROMSTACKVERIFY could verify signatures on programmatically assembled messages. Also, in combination with multiple calls to TXHASH, it could be used to create signatures that commit to complex subsets of transaction data.
>>>>
>>>> If new opcodes are added to push parts of the transaction data directly onto the stack, e.g. OP_INSPECTOUTPUTVALUE, there is perhaps concern that they would obsolete TXHASH, since, in the presence of rolling SHA256 opcodes, TXHASH could be simulated. However, given that TXHASH can compactly create a hash of large portions of transaction data, it seems unlikely that TXHASH would fall into disuse. Also, a combination of TXHASH and transaction introspection opcodes can be used to build "subtractive covenants".
>>>>
>>>> The usual way of building a covenant, which we will call "additive covenants", is to push all the parts of the transaction data you would like to fix onto the stack, hash it all together, and verify the resulting hash matches a fixed value. Another way of building covenants, which we will call "subtractive covenants", is to push all the parts of the transaction data you would like to remain free onto the stack. Then use rolling SHA256 opcodes starting from a fixed midstate that commits to a prefix of the transaction hash data. The free parts are hashed into that midstate. Finally, the resulting hash value is verified to match a value returned by TXHASH. The ability to nicely build subtractive covenants depends on the details of how the TXHASH hash value is constructed, something that I'm told CTV has given consideration to.
>>>> _______________________________________________
>>>> bitcoin-dev mailing list
>>>> bitcoin-dev@lists.linuxfoundation.org
>>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
[-- Attachment #2: Type: text/html, Size: 34466 bytes --]
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-01-28 1:34 ` Anthony Towns
@ 2022-01-28 13:56 ` Russell O'Connor
2022-02-01 1:16 ` Anthony Towns
0 siblings, 1 reply; 58+ messages in thread
From: Russell O'Connor @ 2022-01-28 13:56 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 5292 bytes --]
On Thu, Jan 27, 2022 at 8:34 PM Anthony Towns <aj@erisian.com.au> wrote:
> > An Alternative Proposal::
> > ...
>
> > For similar reasons, TXHASH is not amenable to extending the set of
> txflags
> > at a later date.
>
> > I believe the difficulties with upgrading TXHASH can be mitigated by
> > designing a robust set of TXHASH flags from the start. For example
> having
> > bits to control whether [...]
>
> I don't think that's really feasible -- eg, what you propose doesn't
> cover SIGHASH_GROUP:
>
>
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019243.html
>
For more complex interactions, I was imagining combining this TXHASH
proposal with CAT and/or rolling SHA256 opcodes. If TXHASH ended up
supporting relative or absolute input/output indexes then users could
assemble the hashes of the particular inputs and outputs they care about
into a single signed message.
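As a concrete sketch of that assembly (the flag names and output serializations below are purely illustrative, not part of any specification):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def txhash_single_output(outputs: list, index: int) -> bytes:
    # Stand-in for a hypothetical "<flag> <index> TXHASH" that hashes
    # only the serialization of the selected output.
    return sha256(outputs[index])

# Serialized outputs of some transaction (illustrative placeholders).
outputs = [b"out0: 1.5 BTC to addr A",
           b"out1: 0.3 BTC to addr B",
           b"out2: change"]

# Script-level sketch: two TXHASH calls select outputs 0 and 1, CAT
# concatenates the 32-byte digests, and SHA256 compresses them into the
# single message that CHECKSIGFROMSTACKVERIFY would check a signature
# against. Output 2 (e.g. change) is left unconstrained.
h0 = txhash_single_output(outputs, 0)  # <flag_out0> TXHASH
h1 = txhash_single_output(outputs, 1)  # <flag_out1> TXHASH
message = sha256(h0 + h1)              # CAT SHA256

assert len(message) == 32
```

The point is only that per-field hashes compose: any subset of inputs and outputs the user cares about can be folded into one 32-byte message for a single signature check.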
> > That all said, even if other txhash flag modes are needed in the future,
> > adding TXHASH2 always remains an option.
>
> I think baking this in from day 0 might be better: make TXHASH be
> a multibyte opcode, so that when you decode "0xBB" on the stack,
> you also decode a serialize.h:VarInt as the version number.
I wouldn't be opposed to this.
> '<anyprevout-pubkey> CHECKSIGVERIFY can be simulated by '<apo_style_flag>
> TXHASH <pubkey> CHECKSIGFROMSTACKVERIFY'.
>
> I don't think that's quite right. BIP 118 anyprevout is done by taking
> the pubkey "P", marking it as "APO-capable" (by prefixing it with 0x01),
> and then getting a sighash and sig from the witness. Doing the same
> with TXHASH/CSFSV would just be replacing "<APO:P> CHECKSIGVERIFY" with
> "TXHASH <P> CSFSV" with the witness providing both the signature and
> txhash flag, just as separate elements rather than concatenated. (The
> "APO-capable" part is implicit in the "TXHASH" opcode)
>
Indeed. The TXHASH variant does require splitting the signature and txhash
flag across two stack items. So it wouldn't be an operationally identical
drop-in replacement.
> > In addition to the CTV and ANYPREVOUT applications, with
> > CHECKSIGFROMSTACKVERIFY we can verify signatures on arbitrary messages
> > signed by oracles for oracle applications. This is where we see the
> > benefit of decomposing operations into primitive pieces. By giving users
> > the ability to program their own use cases from components, we get more
> > applications out of fewer op codes!
>
> While I see the appeal of this from a language design perspective;
> I'm not sure it's really the goal we want. When I look at bitcoin's
> existing script, I see a lot of basic opcodes to do simple arithmetic and
> manipulate the stack in various ways, but the opcodes that are actually
> useful are more "do everything at once" things like check(multi)sig or
> sha256. It seems like what's most useful on the blockchain is a higher
> level language, rather than more of blockchain assembly language made
> up of small generic pieces. I guess "program their own use cases from
> components" seems to be coming pretty close to "write your own crypto
> algorithms" here...
>
Which operations in Script are actually composable today?
CHECKSIG composes with nothing else (other than possibly other CHECKSIGs)
as there are no other operations that manipulate public keys or signature
data.
CLTV and CSV in principle can be composed with addition and subtraction and
comparison operations. But where are you going to get other values to add
and subtract from? I suppose you could compare the relative and absolute
locktimes to each other.
What do the HASH functions compose with? Without CAT you cannot construct
messages to hash. You can hash the result of the arithmetic operations,
but you are limited to hashing 32-bit (or 33-bit if you are generous)
strings, which is too little entropy to have any security properties. You
can hash a public key or a signature I suppose.
I don't think there is much in the way of lessons to be drawn from how we
see Bitcoin Script used today with regards to programs built out of
reusable components. Users haven't been composing programs, not because
they don't find composition useful, but rather because the existing
primitives do not lend themselves to being composed at all.
There is one aspect of Bitcoin Script that is composable, which is
(monotone) boolean combinations of the few primitive transaction conditions
that do exist. The miniscript language captures nearly the entirety of
what is composable in Bitcoin Script today: which amounts to conjunctions,
disjunctions (and thresholds) of signatures, locktimes, and revealing hash
preimages.
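That monotone fragment can be sketched in a few lines (the policy encoding here is illustrative, not Miniscript's actual syntax):

```python
# Minimal sketch of the monotone fragment Miniscript captures: policies
# are conjunctions, disjunctions, and thresholds over primitive
# conditions (signatures, locktimes, hash preimages). A policy is
# satisfied by a set of primitive conditions the spender can meet.

def evaluate(policy, satisfied: set) -> bool:
    op, *args = policy
    if op == "prim":                 # pk(A), after(n), sha256(h), ...
        return args[0] in satisfied
    if op == "and":
        return all(evaluate(p, satisfied) for p in args)
    if op == "or":
        return any(evaluate(p, satisfied) for p in args)
    if op == "thresh":               # thresh(k, p1, p2, ...)
        k, *subs = args
        return sum(evaluate(p, satisfied) for p in subs) >= k
    raise ValueError(op)

# or(and(pk(A), pk(B)), and(pk(C), after(144)))
policy = ("or",
          ("and", ("prim", "pk(A)"), ("prim", "pk(B)")),
          ("and", ("prim", "pk(C)"), ("prim", "after(144)")))

assert evaluate(policy, {"pk(A)", "pk(B)"})
assert evaluate(policy, {"pk(C)", "after(144)"})
assert not evaluate(policy, {"pk(A)", "pk(C)"})
```

Monotonicity is what makes this fragment safe to compose: meeting additional conditions can never invalidate a previously valid spend.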
TXHASH + CSFSV won't be enough by itself to allow for very interesting
programs in Bitcoin Script yet; we still need CAT and friends for that, but
CSFSV is at least a step in that direction. CSFSV can take arbitrary
messages and these messages can be fixed strings, or they can be hashes of
strings (that need to be revealed), or they can be hashes returned from
TXHASH, or they can be locktime values, or they can be values that are
added or subtracted from locktime values, or they can be values used for
thresholds, or they can be other pubkeys for delegation purposes, or they
can be other signatures ... for who knows what purpose.
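To make the CSFSV semantics concrete — pop a pubkey, a message, and a signature, and fail unless the signature verifies on that message — here is a toy, self-contained sketch. A real CSFSV would use BIP340 Schnorr verification; a hash-based Lamport one-time signature stands in for it here so the example needs only the standard library.

```python
import hashlib
import secrets

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def lamport_keygen():
    # 256 pairs of secret preimages; the pubkey is their hashes.
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[sha256(s) for s in pair] for pair in sk]
    return sk, pk

def _bits(msg: bytes):
    digest = sha256(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def lamport_sign(sk, msg: bytes):
    # Reveal one preimage per message-digest bit.
    return [sk[i][b] for i, b in enumerate(_bits(msg))]

def lamport_verify(pk, msg: bytes, sig) -> bool:
    # CSFSV-style check: signature must match pubkey on this exact message.
    return all(sha256(sig[i]) == pk[i][b] for i, b in enumerate(_bits(msg)))

# An "oracle" signs an arbitrary message; the script-level check is then
# just: <pk> <msg> <sig> CHECKSIGFROMSTACKVERIFY.
sk, pk = lamport_keygen()
msg = b"BTCUSD settled above 50000 at block 720000"
sig = lamport_sign(sk, msg)
assert lamport_verify(pk, msg, sig)
assert not lamport_verify(pk, b"some other message", sig)
```

The scheme is irrelevant to the argument; what matters is that the message is an arbitrary stack item, so it can equally be a fixed string, a hash of revealed data, or a TXHASH result.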
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-01-28 0:18 ` James O'Beirne
2022-01-28 13:14 ` Michael Folkson
@ 2022-01-28 14:13 ` Russell O'Connor
2022-01-28 15:14 ` James O'Beirne
2022-01-31 2:18 ` Anthony Towns
2 siblings, 1 reply; 58+ messages in thread
From: Russell O'Connor @ 2022-01-28 14:13 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 3550 bytes --]
On Thu, Jan 27, 2022 at 7:19 PM James O'Beirne <james.obeirne@gmail.com>
wrote:
> > I don't think implementing a CTV opcode that we expect to largely be
> obsoleted by a TXHASH at a later date is yielding good value from a soft
> fork process.
>
> This presumes the eventual adoption of TXHASH (or something like it).
> You're presenting a novel idea that, as far as I know, hasn't had much time
> to bake in public. Like Jeremy, I'm concerned by the combinatorial growth
> of flags and the implications that has for testing. Caching for something
> like TXHASH looks to me like a whole different ballgame relative to CTV,
> which has a single kind of hash.
>
Let's not overstate the concern around the combinatorics of TXHASH. It's
not like there is a vast amount of cross-flag interaction we are talking
about here. There are also a combinatorial number of ways of assembling
opcodes in Bitcoin script, but we aren't required to exhaustively test
every single possible Script program.
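For a rough sense of the numbers (assuming ~20 roughly independent flag bits, as in the proposal): exhaustive testing of every combination is about a million cases, while the usual pairwise-coverage target for independent options is only a few hundred joint settings.

```python
from math import comb

n_flags = 20  # approximate number of independent bits in the proposal

# Exhaustive testing of every flag combination:
exhaustive = 2 ** n_flags
assert exhaustive == 1_048_576

# If the flags are (mostly) independent, pairwise coverage is the usual
# engineering target: every pair of flags tested in all 4 joint values.
pairwise_settings = comb(n_flags, 2) * 4
assert pairwise_settings == 760
```

Which figure is the relevant one depends on how much genuine cross-flag interaction exists, which is precisely the point under dispute.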
> Even if we were to adopt something like TXHASH, how long is it going to
> take to develop, test, and release? My guess is "a while" - in the
> meantime, users of Bitcoin are without a vault strategy that doesn't
> require either presigning transactions with ephemeral keys (operationally
> difficult) or multisig configurations that would make Rube Goldberg blush
> (operationally difficult and precarious). The utility of vaulting seems
> underappreciated among consensus devs and it's something I'd like to write
> about soon in a separate post.
>
> > The strongest argument I can make in favour of CTV would be something
> like: "We definitely want bare CTV and if we are going to add CTV to legacy
> script (since we cannot use TXHASH in legacy script), then it is actually
> easier not to exclude it from tapscript, even if we plan to add TXHASH to
> tapscript as well."
>
> Another argument for CTV (which I find especially persuasive) is its
> simplicity - it's relatively easy to reason about and, at this point,
> pretty well understood. It seems like a low-risk change relative to some of
> the other covenant proposals, nearly all of which elicit a good deal of
> headscratching (at least from me) and seem to require not only larger
> on-chain footprints but sizable code changes.
>
> > I am sensitive to technical debt and soft fork processes
>
> If OP_CTV ends up being the most practical approach for vaulting - among
> other things - in terms of weight (which it seems to be at the moment) I
> don't think "technical debt" is an applicable term.
>
Technical debt isn't a measure of weight of transactions. It's a measure
of the code complexity needed to implement, in this case, a Bitcoin Script
interpreter.
By itself, adding a single new hash format for CTV isn't that complex, and
it is certainly simpler than this TXHASH proposal. But then we need to add
another two slightly different hash formats for APO support. And tomorrow
we will need yet another set of transaction hash formats for the next
thing, and so on, with each instance requiring going through its own
soft-fork process. It is at that point we end up with something more
complicated and with more deployment risk than if we had just done
something like TXHASH at the very beginning. But unlike other programming
environments, we cannot refactor our way out of such a situation. We
cannot make a new script version while deprecating the old one. Our only
option here is to be mindful of the long term implications of the design
choices we are making today.
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-01-28 13:14 ` Michael Folkson
@ 2022-01-28 14:17 ` Anthony Towns
2022-01-28 16:38 ` Jeremy
0 siblings, 1 reply; 58+ messages in thread
From: Anthony Towns @ 2022-01-28 14:17 UTC (permalink / raw)
To: Michael Folkson, Bitcoin Protocol Discussion
On Fri, Jan 28, 2022 at 01:14:07PM +0000, Michael Folkson via bitcoin-dev wrote:
> There is not even a custom signet with CTV (as far as I know)
https://twitter.com/jeremyrubin/status/1339699281192656897
signetchallenge=512102946e8ba8eca597194e7ed90377d9bbebc5d17a9609ab3e35e706612ee882759351ae
addnode=50.18.75.225
But I think there's only been a single coinbase consolidation tx, and no
actual CTV transactions?
Cheers,
aj
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-01-28 14:13 ` Russell O'Connor
@ 2022-01-28 15:14 ` James O'Beirne
2022-01-29 15:43 ` Russell O'Connor
0 siblings, 1 reply; 58+ messages in thread
From: James O'Beirne @ 2022-01-28 15:14 UTC (permalink / raw)
To: Russell O'Connor, Bitcoin Protocol Discussion
> Technical debt isn't a measure of weight of transactions.
Sorry, my original sentence was a little unclear. I meant to say that the
notion that CTV is just a subpar waypoint en route to a more general
covenant system may not be accurate if it is a more efficient way (in terms
of chainstate/weight) to express highly useful patterns like vaults. In
that case, characterizing CTV as technical debt wouldn't be right.
> Our only option here is to be mindful of the long term implications of
the design choices we are making today.
Your points are well taken - I don't think anyone is arguing against
thinking hard about consensus changes. But I have yet to see a proposal for
covenants that is as efficient on-chain and easy to reason about as CTV is.
I also think there's some value in "legging into" covenants by deploying a
simple, non-recursive construction like CTV that services some very
important uses, and then taking as much time as necessary to think about
how to solve more existential problems, like UTXO scalability, that likely
require a recursive covenant construction.
There doesn't have to be mutual exclusion in the approaches, especially
when the maintenance burden of CTV seems to be so low. If we end up
deploying something that requires a wider variety of in-script hashing, it
seems likely that CTV's hash will be able to "free ride" on whatever more
general sighash cache structure we come up with.
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-01-28 14:17 ` Anthony Towns
@ 2022-01-28 16:38 ` Jeremy
0 siblings, 0 replies; 58+ messages in thread
From: Jeremy @ 2022-01-28 16:38 UTC (permalink / raw)
To: Anthony Towns, Bitcoin Protocol Discussion
I probably need to reset it -- I ran into some issues with the IBD latch
bug IIRC and had difficulty producing new blocks.
I sent funds as a manual faucet to at least one person... not aware of
anyone else finding use for the signet. In part this is due to the fact
that in order to run a signet, you also kind of need to run some kind of
faucet on it, which wasn't readily available when I launched it previously.
I think I can use https://github.com/jsarenik/bitcoin-faucet-shell now
though.
Usually people use regtest to play around with CTV, less so signet.
There is value in a signet, but I don't think that "there's no signet
for it" is a blocking issue vs. a nice-to-have.
--
@JeremyRubin <https://twitter.com/JeremyRubin>
On Fri, Jan 28, 2022 at 6:18 AM Anthony Towns via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> On Fri, Jan 28, 2022 at 01:14:07PM +0000, Michael Folkson via bitcoin-dev
> wrote:
> > There is not even a custom signet with CTV (as far as I know)
>
> https://twitter.com/jeremyrubin/status/1339699281192656897
>
>
> signetchallenge=512102946e8ba8eca597194e7ed90377d9bbebc5d17a9609ab3e35e706612ee882759351ae
> addnode=50.18.75.225
>
> But I think there's only been a single coinbase consolidation tx, and no
> actual CTV transactions?
>
> Cheers,
> aj
>
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-01-28 15:14 ` James O'Beirne
@ 2022-01-29 15:43 ` Russell O'Connor
2022-01-29 17:02 ` Jeremy Rubin
[not found] ` <CAD5xwhjHv2EGYb33p2MRS=VSz=ciGwAsiafX1yRHjxQEXfykSA@mail.gmail.com>
0 siblings, 2 replies; 58+ messages in thread
From: Russell O'Connor @ 2022-01-29 15:43 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
On Fri, Jan 28, 2022 at 10:14 AM James O'Beirne <james.obeirne@gmail.com>
wrote:
> > Technical debt isn't a measure of weight of transactions.
>
> Sorry, my original sentence was a little unclear. I meant to say that the
> notion that CTV is just a subpar waypoint en route to a more general
> covenant system may not be accurate if it is a more efficient way (in terms
> of chainstate/weight) to express highly useful patterns like vaults. In
> that case, characterizing CTV as technical debt wouldn't be right.
>
It only costs a few more weight units, on the order of 2 or 3, to use
TXHASH in place of CTV. Notably, the reverse, using CTV in place of
TXHASH, is much more expensive, requiring more than 32 weight units.
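As a rough cross-check of those numbers (the byte counts below are back-of-envelope assumptions about the encodings, not figures from either proposal):

```python
# Assumed encodings (back-of-envelope, not normative):
#   CTV:           <32-byte hash> OP_CHECKTEMPLATEVERIFY
#   TXHASH as CTV: <flag byte> OP_TXHASH <32-byte hash> OP_EQUALVERIFY
ctv_script    = 1 + 32 + 1              # push + hash + opcode = 34 bytes
txhash_as_ctv = 1 + 1 + 1 + 1 + 32 + 1  # flag push + TXHASH + push + hash + EQUALVERIFY

print(txhash_as_ctv - ctv_script)       # 3 -- "on the order of 2 or 3"

# The reverse direction: with only verify semantics, getting the hash
# onto the stack means supplying all 32 bytes in the witness.
ctv_as_txhash_extra = 1 + 32            # witness push of the hash itself
print(ctv_as_txhash_extra)              # 33 -- "more than 32 weight units"
```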
> > Our only option here is to be mindful of the long term implications of
> the design choices we are making today.
>
> Your points are well taken - I don't think anyone is arguing against
> thinking hard about consensus changes. But I have yet to see a proposal for
> covenants that is as efficient on-chain and easy to reason about as CTV is.
>
> I also think there's some value in "legging into" covenants by deploying a
> simple, non-recursive construction like CTV that services some very
> important uses, and then taking as much time as necessary to think about
> how to solve more existential problems, like UTXO scalability, that likely
> require a recursive covenant construction.
>
> There doesn't have to be mutual exclusion in the approaches, especially
> when the maintenance burden of CTV seems to be so low. If we end up
> deploying something that requires a wider variety of in-script hashing, it
> seems likely that CTV's hash will be able to "free ride" on whatever more
> general sighash cache structure we come up with.
>
Perhaps there is some misunderstanding. TXHASH + CSFSV doesn't allow for
complex or recursive covenants. Typically CAT is needed, at minimum, to
create those sorts of things. TXHASH still amounts to deploying a
non-recursive covenant construction.
With regards to CTV, in short my primary criticisms are (1) Push semantics
is preferable to verify semantics, because simulating verify semantics from
push is cheap, while simulating push semantics from verify is not
particularly cheap.
And (2) given Push semantics we ought to have parameters to support both
CTV-style hashes and APO-style hashes (which in the presence of CSFSV gives
us APO applications), and, while we are at it, as many other style hashes
as we can reasonably devise so we don't have to go through yet another
soft-fork process every time someone comes up with a new subset of
transaction data they would like to be hashed for their application.
I understand why CTV was designed with verify semantics: it would like to
be NOP compatible. That certainly made sense pre-tapscript. I just
haven't (yet) found the use cases for that compatibility to be compelling
in a post-tapscript world.
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-01-29 15:43 ` Russell O'Connor
@ 2022-01-29 17:02 ` Jeremy Rubin
[not found] ` <CAD5xwhjHv2EGYb33p2MRS=VSz=ciGwAsiafX1yRHjxQEXfykSA@mail.gmail.com>
1 sibling, 0 replies; 58+ messages in thread
From: Jeremy Rubin @ 2022-01-29 17:02 UTC (permalink / raw)
To: Russell O'Connor, Bitcoin Protocol Discussion
Perhaps there is some misunderstanding. TXHASH + CSFSV doesn't allow for
complex or recursive covenants. Typically CAT is needed, at minimum, to
create those sorts of things. TXHASH still amounts to deploying a
non-recursive covenant construction.
This seems false to me.
<Only hash a single input scriptpubkey> txhash <only hash a single output
scriptpubkey> txhash equalverify
Is that not a recursive covenant? With a little extra work you can also
control for amounts and stuff.
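To spell out why that script is recursive (a toy Python model, with plain SHA256 standing in for the TXHASH calls; the single-scriptPubKey flag semantics are assumed from the proposal):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# The script requires h(input's scriptPubKey) == h(an output's
# scriptPubKey).  Since the scriptPubKey being spent *is* the covenant
# script, the only way to satisfy it is to recreate that same
# scriptPubKey in an output, so the covenant propagates forever.
covenant_spk = b"<flags> TXHASH <flags> TXHASH EQUALVERIFY"  # placeholder bytes

def spend_ok(input_spk: bytes, output_spk: bytes) -> bool:
    return h(input_spk) == h(output_spk)

utxo = covenant_spk
for _ in range(5):                       # a chain of five spends
    assert spend_ok(utxo, covenant_spk)  # forced: output carries same spk
    assert not spend_ok(utxo, b"<anything else>")
    utxo = covenant_spk                  # the covenant never dies
```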
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
[not found] ` <CAD5xwhjHv2EGYb33p2MRS=VSz=ciGwAsiafX1yRHjxQEXfykSA@mail.gmail.com>
@ 2022-01-29 17:14 ` Russell O'Connor
0 siblings, 0 replies; 58+ messages in thread
From: Russell O'Connor @ 2022-01-29 17:14 UTC (permalink / raw)
To: Jeremy Rubin; +Cc: Bitcoin Protocol Discussion
The hash would normally also cover the hash flags in use, and would be
different in those two cases.
But yes, it seems at the last minute I did include a suggestion to disable
covering the flags themselves in the hash, and appear to have accidentally
allowed for recursive covenants (a common occurrence when designing
opcodes).
On Sat, Jan 29, 2022 at 12:01 PM Jeremy Rubin <j@rubin.io> wrote:
>
>
>
>> Perhaps there is some misunderstanding. TXHASH + CSFSV doesn't allow for
>> complex or recursive covenants. Typically CAT is needed, at minimum, to
>> create those sorts of things. TXHASH still amounts to deploying a
>> non-recursive covenant construction.
>>
>>
> This seems false to me.
>
> <Only hash a single input scriptpubkey> txhash <only hash a single output
> scriptpubkey> txhash equalverify
>
> Is that not a recursive covenant? With a little extra work you can also
> control for amounts and stuff.
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-01-28 0:18 ` James O'Beirne
2022-01-28 13:14 ` Michael Folkson
2022-01-28 14:13 ` Russell O'Connor
@ 2022-01-31 2:18 ` Anthony Towns
2 siblings, 0 replies; 58+ messages in thread
From: Anthony Towns @ 2022-01-31 2:18 UTC (permalink / raw)
To: James O'Beirne, Bitcoin Protocol Discussion
On Thu, Jan 27, 2022 at 07:18:54PM -0500, James O'Beirne via bitcoin-dev wrote:
> > I don't think implementing a CTV opcode that we expect to largely be
> > obsoleted by a TXHASH at a later date is yielding good value from a soft
> > fork process.
> Caching for something
> like TXHASH looks to me like a whole different ballgame relative to CTV,
> which has a single kind of hash.
I don't think caching is a particular problem even for the plethora of
flags Russell described: you cache each value upon use, and reuse that
cached item if it's needed for other signatures within the tx; sharing
with BIP 143, 341 or 342 signatures as appropriate. Once everything's
cached, each signature then only requires hashing about 32*17+4 = ~548
bytes, and you're only hashing each part of the transaction once in
order to satisfy every possible flag.
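That caching strategy can be sketched as follows (a toy model, not Bitcoin Core's actual signature-hash cache):

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

class TxHashCache:
    """Memoize intermediate hashes (sha_outputs, sha_sequences, ...) so
    each part of the transaction is hashed at most once, however many
    signatures or flag combinations reference it."""
    def __init__(self, tx_parts):
        self.parts = tx_parts
        self.cache = {}
        self.hash_calls = 0

    def get(self, name):
        if name not in self.cache:
            self.hash_calls += 1
            self.cache[name] = sha256(self.parts[name])
        return self.cache[name]

parts = {"outputs": b"...outs...", "sequences": b"...seqs...",
         "amounts": b"...amts..."}
cache = TxHashCache(parts)

# A hundred signatures, each selecting some subset of the cached fields:
for _ in range(100):
    msg = cache.get("outputs") + cache.get("sequences")
    sighash = sha256(msg)

print(cache.hash_calls)  # 2 -- each referenced part was hashed only once
```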
> Even if we were to adopt something like TXHASH, how long is it going to
> take to develop, test, and release?
I think the work to release something like TXHASH is all in deciding:
- if TXHASH or CTV or something else is the better "UX"
- what is a good tx-to-message algorithm and how it should be
  parameterized
- what's an appropriate upgrade path for the TXHASH/CTV/??? mechanism
BIP 119 provides one answer to each of those, but you still have to do
the work to decide if it's a *good* answer to each of them.
> My guess is "a while" -
If we want to get a good answer to those questions, it might be true
that it takes a while; but even if we want to rush ahead with more of
a "well, we're pretty sure it's not going to be a disaster" attitude,
we can do that with TXHASH (almost) as easily as with CTV.
> The utility of vaulting seems
> underappreciated among consensus devs and it's something I'd like to write
> about soon in a separate post.
I think most of the opposition is just that support for CTV seems to be
taking the form "something must be done; this is something, therefore
it must be done"...
I'd be more comfortable if the support looked more like "here are the
alternatives to CTV, and here's the advantages and drawbacks for each,
here's how they interact with other ideas, and here's why we think,
on balance, we think this approach is the best one". But mostly the
alternatives are dismissed with "this will take too long" or "this enables
recursive covenants which someone (we don't know who) might oppose".
Cheers,
aj
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-01-28 13:56 ` Russell O'Connor
@ 2022-02-01 1:16 ` Anthony Towns
2022-02-08 2:16 ` Russell O'Connor
0 siblings, 1 reply; 58+ messages in thread
From: Anthony Towns @ 2022-02-01 1:16 UTC (permalink / raw)
To: Russell O'Connor, Bitcoin Protocol Discussion
On Fri, Jan 28, 2022 at 08:56:25AM -0500, Russell O'Connor via bitcoin-dev wrote:
> > https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019243.html
> For more complex interactions, I was imagining combining this TXHASH
> proposal with CAT and/or rolling SHA256 opcodes. If TXHASH ended up
> supporting relative or absolute input/output indexes then users could
> assemble the hashes of the particular inputs and outputs they care about
> into a single signed message.
That's certainly possible, but it sure seems overly complicated and
error prone...
> > While I see the appeal of this from a language design perspective;
> > I'm not sure it's really the goal we want. When I look at bitcoin's
> > existing script, I see a lot of basic opcodes to do simple arithmetic and
> > manipulate the stack in various ways, but the opcodes that are actually
> > useful are more "do everything at once" things like check(multi)sig or
> > sha256. It seems like what's most useful on the blockchain is a higher
> > level language, rather than more of blockchain assembly language made
> > up of small generic pieces. I guess "program their own use cases from
> > components" seems to be coming pretty close to "write your own crypto
> > algorithms" here...
> Which operations in Script are actually composable today?
> There is one aspect of Bitcoin Script that is composable, which is
> (monotone) boolean combinations of the few primitive transaction conditions
> that do exist. The miniscript language captures nearly the entirety of
> what is composable in Bitcoin Script today: which amounts to conjunctions,
> disjunctions (and thresholds) of signatures, locktimes, and revealing hash
> preimages.
Yeah; I think miniscript captures everything bitcoin script is actually
useful for today, and if we were designing bitcoin from scratch and
had known that was the feature set we were going to end up with, we'd
have come up with something simpler and a fair bit more high level than
bitcoin script for the interpreter.
> I don't think there is much in the way of lessons to be drawn from how we
> see Bitcoin Script used today with regards to programs built out of
> reusable components.
I guess I think one conclusion we should draw is some modesty in how
good we are at creating general reusable components. That is, bitcoin
script looks a lot like a relatively general expression language,
that should allow you to write interesting things; but in practice a
lot of it was buggy (OP_VER hardforks and resource exhaustion issues),
or not powerful enough to actually be interesting, or too complicated
to actually get enough use out of [0].
> TXHASH + CSFSV won't be enough by itself to allow for very interesting
> programs in Bitcoin Script yet, we still need CAT and friends for that,
"CAT" and "CHECKSIGFROMSTACK" are both things that have been available in
elements for a while; has anyone managed to build anything interesting
with them in practice, or are they only useful for thought experiments
and blog posts? To me, that suggests that while they're useful for
theoretical discussion, they don't turn out to be a good design in
practice.
> but
> CSFSV is at least a step in that direction. CSFSV can take arbitrary
> messages and these messages can be fixed strings, or they can be hashes of
> strings (that need to be revealed), or they can be hashes returned from
> TXHASH, or they can be locktime values, or they can be values that are
> added or subtracted from locktime values, or they can be values used for
> thresholds, or they can be other pubkeys for delegation purposes, or they
> can be other signatures ... for who knows what purpose.
I mean, if you can't even think of a couple of uses, that doesn't seem
very interesting to pursue in the near term? CTV has something like half
a dozen fairly near-term use cases, but obviously those can all be done
just with TXHASH without a need for CSFS, and likewise all the ANYPREVOUT
things can obviously be done via CHECKSIG without either TXHASH or CSFS...
To me, the point of having CSFS (as opposed to CHECKSIG) seems to be
verifying that an oracle asserted something; but for really simple boolean
decisions, doing that via a DLC seems better in general since that moves
more of the work off-chain; and for the case where the signature is being
used to authenticate input into the script rather than just gating a path,
that feels a bit like a weaker version of graftroot?
I guess I'd still be interested in the answer to:
> > If we had CTV, POP_SIGDATA, and SIGHASH_NO_TX_DATA_AT_ALL but no OP_CAT,
> > are there any practical use cases that wouldn't be covered that having
> > TXHASH/CAT/CHECKSIGFROMSTACK instead would allow? Or where those would
> > be significantly more convenient/efficient?
> >
> > (Assume "y x POP_SIGDATA POP_SIGDATA p CHECKSIGVERIFY q CHECKSIG"
> > commits to a vector [x,y] via p but does not commit to either via q so
> > that there's some "CAT"-like behaviour available)
TXHASH seems to me to be clearly the more flexible opcode compared to
CTV; but maybe all that flexibility is wasted, and all the real use
cases actually just want CHECKSIG or CTV? I'd feel much better having
some idea of what the advantage of being flexible there is...
But all that aside, probably the real question is can we simplify CTV's
transaction message algorithm, if we assume APO is enabled simultaneously?
If it doesn't get simplified and needs its own hashing algorithm anyway,
that would probably be a good reason to keep them separate.
First, since ANYPREVOUT commits to the scriptPubKey, you'd need to use
ANYPREVOUTANYSCRIPT for CTV-like behaviour.
ANYPREVOUTANYSCRIPT is specced as committing to:
nVersion
nLockTime
nSequence
spend_type and annex present
sha_annex (if present)
sha_outputs (ALL) or sha_single_output (SINGLE)
key_version
codesep_pos
CTV commits to:
nVersion
nLockTime
scriptSig hash "(maybe!)"
input count
sequences hash
output count
outputs hash
input index
(CTV thus allows annex malleability, since it neither commits to the
annex nor forbids inclusion of an annex)
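For illustration, here is a sketch of a commitment over that field list (this is not the exact BIP 119 serialization; the field order follows the list above but the encodings are simplified assumptions):

```python
import hashlib, struct

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def ctv_style_hash(n_version, n_locktime, sequences, outputs,
                   input_index, script_sigs=None):
    # Field order follows the list above; encodings are simplified
    # (the real DefaultCheckTemplateVerifyHash has exact rules).
    msg = struct.pack("<i", n_version)
    msg += struct.pack("<I", n_locktime)
    if script_sigs and any(script_sigs):          # the "(maybe!)" field
        msg += sha256(b"".join(script_sigs))
    msg += struct.pack("<I", len(sequences))      # input count
    msg += sha256(b"".join(struct.pack("<I", s) for s in sequences))
    msg += struct.pack("<I", len(outputs))        # output count
    msg += sha256(b"".join(outputs))
    msg += struct.pack("<I", input_index)
    # Note what is absent: the annex -- hence the malleability remark.
    return sha256(msg)

h1 = ctv_style_hash(2, 0, [0xffffffff], [b"out0"], 0)
h2 = ctv_style_hash(2, 0, [0xffffffff], [b"out0-changed"], 0)
assert h1 != h2   # changing any committed field changes the hash
```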
"output count" and "outputs hash" would both be covered by sha_outputs
with ANYPREVOUTANYSCRIPT|ALL.
I think "scriptSig hash" is only covered to avoid txid malleability; but
just adjusting your protocol to use APO signatures instead of relying on
the txid of future transactions also solves that problem.
I believe "sequences hash", "input count" and "input index" are all an
important part of ensuring that if you have two UTXOs distributing 0.42
BTC to the same set of addresses via CTV, that you can't combine them in a
single transaction and end up losing one of the UTXOs to fees. I
don't believe there's a way to resolve that with bip 118 alone, however
that does seem to be a similar problem to the one that SIGHASH_GROUP
tries to solve.
SIGHASH_GROUP [1] would be an alternative to ALL/SINGLE/NONE, with the exact
group of outputs being committed to determined via the annex.
ANYPREVOUTANYSCRIPT|GROUP would commit to:
nVersion
nLockTime
nSequence
spend_type and annex present
sha_annex (if present)
sha_group_outputs (GROUP)
key_version
codesep_pos
So in that case if you have your two inputs:
0.42 [pays 0.21 to A, 0.10 to B, 0.10 to C]
0.42 [pays 0.21 to A, 0.10 to B, 0.10 to C]
then, either:
a) if they're both committed with GROUP and sig_group_count = 3, then
the outputs must be [0.21 A, 0.10 B, 0.10 C, 0.21 A, 0.10 B, 0.10
C], and you don't lose funds
b) if they're both committed with GROUP and the first is
sig_group_count=3 and the second is sig_group_count=0, then the
outputs can be [0.21 A, 0.10 B, 0.10 C, *anything] -- but in that
case the second input is already signalling that it's meant to be
paired with another input to fund the same three outputs, so any
funds loss is at least intentional
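Those two cases can be modelled as a validation rule (my own toy reading of the sig_group_count semantics, not the SIGHASH_GROUP spec itself):

```python
def check_groups(inputs, outputs):
    """inputs: list of (committed_outputs, sig_group_count).  An input
    with sig_group_count == 0 rides along with the previous group; any
    other count claims that many of the next outputs, which must match
    exactly what the input committed to."""
    pos = 0
    for committed, count in inputs:
        if count == 0:
            continue
        if outputs[pos:pos + count] != committed:
            return False
        pos += count
    return True

A, B, C = "0.21->A", "0.10->B", "0.10->C"
two_utxos = [([A, B, C], 3), ([A, B, C], 3)]

# case (a): both inputs demand their own copy of the three outputs
assert check_groups(two_utxos, [A, B, C, A, B, C])
assert not check_groups(two_utxos, [A, B, C])        # can't merge the groups

# case (b): the second input opts out (count = 0), so later outputs are
# unconstrained -- but that is an explicit choice by that input's signer
assert check_groups([([A, B, C], 3), ([A, B, C], 0)], [A, B, C, "anything"])
```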
Note that this means txids are very unstable: if a tx is only protected
by SIGHASH_GROUP commitments then miners/relayers can add outputs, or
reorganise the groups without making the tx invalid. Beyond requiring
the signatures to be APO/APOAS-based to deal with that, we'd also need
to avoid txs getting rbf-pinned by some malicious third party who pulls
apart the groups and assembles a new tx that's hard to rbf but also
unlikely to confirm due to having a low feerate.
Note also that not reusing addresses solves this case -- it's only a
problem when you're paying the same amounts to the same addresses.
Being able to combine additional inputs and outputs at a later date
(which necessarily changes the txid) is an advantage though: it lets
you add additional funds and claim change, which allows you to adjust
to different fee rates.
I don't think the SIGHASH_GROUP approach would work very well without
access to the annex, ie if you're trying to do CTV encoded either in a
plain scriptPubKey or via segwit/p2sh.
I think that would give 16 different sighashes, choosing one of four
options for outputs,
ALL/NONE/SINGLE/GROUP
-- which outputs are committed to
and one of four options for inputs,
-/ANYONECANPAY/ANYPREVOUT/ANYPREVOUTANYSCRIPT
-- all inputs committed to, specific input committed to,
scriptpubkey/tapscript committed to, or just the
nseq/annex/codesep_pos
vs the ~155,000 sighashes in the TXHASH proposal.
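The sixteen is just the cross product of the two menus above:

```python
from itertools import product

output_modes = ["ALL", "NONE", "SINGLE", "GROUP"]
input_modes  = ["-", "ANYONECANPAY", "ANYPREVOUT", "ANYPREVOUTANYSCRIPT"]

sighashes = list(product(output_modes, input_modes))
print(len(sighashes))   # 16, vs the ~155,000 flag combinations of TXHASH
```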
I don't think there's an efficient way of doing SIGHASH_GROUP via tx
introspection opcodes that doesn't also introduce a quadratic hashing
risk -- you need to prevent different inputs from re-hashing distinct but
overlapping sets of outputs, and if your opcodes only allow grabbing one
output at a time to add to the message being signed you have to do a lot
of coding if you want to let the signer choose how many outputs to commit
to; if you provide an opcode that grabs many outputs to hash, it seems
hard to do that generically in a way that avoids quadratic behaviour.
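A back-of-envelope count makes the quadratic risk concrete (the output size and counts here are assumptions, just to show the scaling):

```python
OUT_BYTES = 43    # assumed size of one serialized output
N = 1000          # inputs, each wanting to commit to all N outputs

# One-output-at-a-time introspection: every input re-hashes every output.
naive_bytes_hashed = N * (N * OUT_BYTES)

# A shared sha_outputs cache: the outputs are hashed once, then each
# input only hashes the 32-byte intermediate result.
cached_bytes_hashed = N * OUT_BYTES + N * 32

print(naive_bytes_hashed // cached_bytes_hashed)  # ~573x more bytes hashed
```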
So I think that suggests two alternative approaches, beyond the
VERIFY-vs-PUSH semantic:
- have a dedicated sighash type for CTV (either an explicit one for it,
per bip119, or support thousands of options like the proposal in this
thread, one of which happens to be about the same as the bip119 idea)
- use ANYPREVOUTANYSCRIPT|GROUP for CTV, which means also implementing
annex parsing and better RBF behaviour to avoid those txs being
excessively vulnerable to pinning; with the advantage being that
txs using "GROUP" sigs can be combined either for batching purposes
or for adapting to the fee market after the signature has been made,
and the disadvantage that you can't rely on stable txids when looking
for CTV spends and have to continue using APO/APOAS when chaining
signatures on top of unconfirmed CTV outputs
Cheers,
aj
[0] Here's bitmatrix trying to multiply two numbers together:
https://medium.com/bit-matrix/technical-how-does-bitmatrix-v1-multiply-two-integers-in-the-absence-of-op-mul-a58b7a3794a3
Likewise, doing a point preimage reveal via clever scripting
pre-taproot never saw an implementation, despite seeming
theoretically plausible.
https://lists.linuxfoundation.org/pipermail/lightning-dev/2015-November/000344.html
[1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019243.html
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-01 1:16 ` Anthony Towns
@ 2022-02-08 2:16 ` Russell O'Connor
2022-02-17 14:27 ` Anthony Towns
0 siblings, 1 reply; 58+ messages in thread
From: Russell O'Connor @ 2022-02-08 2:16 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
On Mon, Jan 31, 2022 at 8:16 PM Anthony Towns <aj@erisian.com.au> wrote:
> On Fri, Jan 28, 2022 at 08:56:25AM -0500, Russell O'Connor via bitcoin-dev
> wrote:
> > >
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019243.html
> > For more complex interactions, I was imagining combining this TXHASH
> > proposal with CAT and/or rolling SHA256 opcodes. If TXHASH ended up
> > supporting relative or absolute input/output indexes then users could
> > assemble the hashes of the particular inputs and outputs they care about
> > into a single signed message.
>
> That's certainly possible, but it sure seems overly complicated and
> error prone...
>
Indeed, and we really want something that can be programmed at redemption
time.
That probably involves something like how the historic MULTISIG worked by
having list of input / output indexes be passed in along with length
arguments.
I don't think there will be problems with quadratic hashing here because as
more inputs are listed, the witness in turn grows larger itself. The number
of stack elements that can be copied is limited by a constant (3DUP).
Certainly care is needed here, but also keep in mind that an OP_HASH256
does a double hash and costs one weight unit.
That said, your SIGHASH_GROUP proposal suggests that some sort of
intra-input communication is really needed, and that is something I would
need to think about.
While normally I'd be hesitant about this sort of feature creep, when we
are talking about doing soft-forks, I really think it makes sense to think
through these sorts of issues (as we are doing here).
> > I don't think there is much in the way of lessons to be drawn from how we
> > see Bitcoin Script used today with regards to programs built out of
> > reusable components.
>
> I guess I think one conclusion we should draw is some modesty in how
> good we are at creating general reusable components. That is, bitcoin
> script looks a lot like a relatively general expression language,
> that should allow you to write interesting things; but in practice a
> lot of it was buggy (OP_VER hardforks and resource exhaustion issues),
> or not powerful enough to actually be interesting, or too complicated
> to actually get enough use out of [0].
>
> > TXHASH + CSFSV won't be enough by itself to allow for very interesting
> > programs in Bitcoin Script yet, we still need CAT and friends for that,
>
> "CAT" and "CHECKSIGFROMSTACK" are both things that have been available in
> elements for a while; has anyone managed to build anything interesting
> with them in practice, or are they only useful for thought experiments
> and blog posts? To me, that suggests that while they're useful for
> theoretical discussion, they don't turn out to be a good design in
> practice.
>
Perhaps the lesson to be drawn is that languages should support multiplying
two numbers together.
Having 2/3rds of the language you need to write interesting programs doesn't
mean that you get 2/3rds of the interesting programs written.
But beyond that, there is a lot more to a smart contract than just the
Script. Dmitry Petukhov has a fleshed-out design for asset-based lending
on Liquid at https://ruggedbytes.com/articles/ll/, despite the limitations
of (pre-taproot) Elements Script. But to make it a real thing you need
infrastructure for working with partial transactions, key management, etc.
> but
> > CSFSV is at least a step in that direction. CSFSV can take arbitrary
> > messages and these messages can be fixed strings, or they can be hashes
> of
> > strings (that need to be revealed), or they can be hashes returned from
> > TXHASH, or they can be locktime values, or they can be values that are
> > added or subtracted from locktime values, or they can be values used for
> > thresholds, or they can be other pubkeys for delegation purposes, or they
> > can be other signatures ... for who knows what purpose.
>
> I mean, if you can't even think of a couple of uses, that doesn't seem
> very interesting to pursue in the near term? CTV has something like half
> a dozen fairly near-term use cases, but obviously those can all be done
> just with TXHASH without a need for CSFS, and likewise all the ANYPREVOUT
> things can obviously be done via CHECKSIG without either TXHASH or CSFS...
>
> To me, the point of having CSFS (as opposed to CHECKSIG) seems to be
> verifying that an oracle asserted something; but for really simple boolean
> decisions, doing that via a DLC seems better in general since that moves
> more of the work off-chain; and for the case where the signature is being
> used to authenticate input into the script rather than just gating a path,
> that feels a bit like a weaker version of graftroot?
>
I didn't really mean this as a list of applications; it was a list of
values that CSFSV composes with. Applications include delegation of pubkeys
and oracles, and, in the presence of CAT and transaction reflection
primitives, presumably many more things.
> I guess I'd still be interested in the answer to:
>
> > > If we had CTV, POP_SIGDATA, and SIGHASH_NO_TX_DATA_AT_ALL but no
> OP_CAT,
> > > are there any practical use cases that wouldn't be covered that having
> > > TXHASH/CAT/CHECKSIGFROMSTACK instead would allow? Or where those would
> > > be significantly more convenient/efficient?
> > >
> > > (Assume "y x POP_SIGDATA POP_SIGDATA p CHECKSIGVERIFY q CHECKSIG"
> > > commits to a vector [x,y] via p but does not commit to either via q so
> > > that there's some "CAT"-like behaviour available)
>
I don't know if this is the answer you are looking for, but technically
TXHASH + CAT + SHA256 awkwardly gives you limited transaction reflection.
In fact, you might not even need TXHASH, though it certainly helps.
> TXHASH seems to me to be clearly the more flexible opcode compared to
> CTV; but maybe all that flexibility is wasted, and all the real use
> cases actually just want CHECKSIG or CTV? I'd feel much better having
> some idea of what the advantage of being flexible there is...
>
The flexibility of TXHASH is intended to head off the need for future soft
forks. If we had specific applications in mind, we could simply set up the
transaction hash flags to cover all the applications we know about. But it
is the applications that we don't know about that worry me. If we don't
put options in place with this soft-fork proposal, then they will need
their own soft-fork down the line; and the next application after that, and
so on.
If our attitude is to craft our soft-forks as narrowly as possible to limit
them to what only allows for given tasks, then we are going to end up
needing a lot more soft-forks, and that is not a good outcome.
> But all that aside, probably the real question is can we simplify CTV's
> transaction message algorithm, if we assume APO is enabled simultaneously?
> If it doesn't get simplified and needs its own hashing algorithm anyway,
> that would probably be a good reason to keep them separate.
>
> First, since ANYPREVOUT commits to the scriptPubKey, you'd need to use
> ANYPREVOUTANYSCRIPT for CTV-like behaviour.
>
> ANYPREVOUTANYSCRIPT is specced as committing to:
> nVersion
> nLockTime
> nSequence
> spend_type and annex present
> sha_annex (if present)
> sha_outputs (ALL) or sha_single_output (SINGLE)
> key_version
> codesep_pos
>
> CTV commits to:
> nVersion
> nLockTime
> scriptSig hash "(maybe!)"
> input count
> sequences hash
> output count
> outputs hash
> input index
>
> (CTV thus allows annex malleability, since it neither commits to the
> annex nor forbids inclusion of an annex)
>
> "output count" and "outputs hash" would both be covered by sha_outputs
> with ANYPREVOUTANYSCRIPT|ALL.
>
> I think "scriptSig hash" is only covered to avoid txid malleability; but
> just adjusting your protocol to use APO signatures instead of relying on
> the txid of future transactions also solves that problem.
>
> I believe "sequences hash", "input count" and "input index" are all an
> important part of ensuring that if you have two UTXOs distributing 0.42
> BTC to the same set of addresses via CTV, that you can't combine them in a
> single transaction and end up losing one of the UTXOs to fees. I
> don't believe there's a way to resolve that with bip 118 alone, however
> that does seem to be a similar problem to the one that SIGHASH_GROUP
> tries to solve.
>
It was my understanding that it is only "input count = 1" that prevents
this issue.
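A toy model of that restriction (the hash and serialization here are stand-ins, not BIP 119's actual digest): because each covenant's hash commits to input count = 1, a transaction that spends both UTXOs at once has input count 2 and fails both covenants.

```python
import hashlib

# Toy CTV-style hash: commits to the ordered outputs and the input count.
def ctv_hash(outputs, input_count: int) -> bytes:
    ser = repr((outputs, input_count)).encode()
    return hashlib.sha256(ser).digest()

outs = [("A", 21_000_000), ("B", 10_000_000), ("C", 10_000_000)]
committed = ctv_hash(outs, 1)   # each UTXO's script commits to 1 input

# Spending either UTXO alone: the recomputed hash matches.
assert ctv_hash(outs, 1) == committed
# Combining both UTXOs in one tx: input count is 2, so the recomputed
# hash matches neither covenant and the fund-burning tx is invalid.
assert ctv_hash(outs, 2) != committed
```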
> SIGHASH_GROUP [1] would be an alternative to ALL/SINGLE/NONE, with the exact
> group of outputs being committed to determined via the annex.
> ANYPREVOUTANYSCRIPT|GROUP would commit to:
>
> nVersion
> nLockTime
> nSequence
> spend_type and annex present
> sha_annex (if present)
> sha_group_outputs (GROUP)
> key_version
> codesep_pos
>
> So in that case if you have your two inputs:
>
> 0.42 [pays 0.21 to A, 0.10 to B, 0.10 to C]
> 0.42 [pays 0.21 to A, 0.10 to B, 0.10 to C]
>
> then, either:
>
> a) if they're both committed with GROUP and sig_group_count = 3, then
> the outputs must be [0.21 A, 0.10 B, 0.10 C, 0.21 A, 0.10 B, 0.10
> C], and you don't lose funds
>
> b) if they're both committed with GROUP and the first is
> sig_group_count=3 and the second is sig_group_count=0, then the
> outputs can be [0.21 A, 0.10 B, 0.10 C, *anything] -- but in that
> case the second input is already signalling that it's meant to be
> paired with another input to fund the same three outputs, so any
> funds loss is at least intentional
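To make that grouping rule concrete, here is a Python sketch of how per-input sig_group_count values could partition the output list (semantics paraphrased from the SIGHASH_GROUP discussion; the details are illustrative, not the draft's exact rules):

```python
def output_groups(group_counts, n_outputs):
    """Partition outputs 0..n_outputs-1 into per-input groups.
    count > 0 claims that many outputs starting at the cursor;
    count == 0 reuses the previous input's group (case (b) above)."""
    groups, cursor, prev = [], 0, None
    for count in group_counts:
        if count == 0:
            if prev is None:
                raise ValueError("first input cannot reuse a group")
            groups.append(prev)
        else:
            prev = list(range(cursor, cursor + count))
            cursor += count
            groups.append(prev)
    if cursor > n_outputs:
        raise ValueError("groups overrun the output list")
    return groups

# Case (a): both inputs claim their own group of 3 -> 6 outputs, no loss.
assert output_groups([3, 3], 6) == [[0, 1, 2], [3, 4, 5]]
# Case (b): second input signals count 0 -> both fund the same 3 outputs.
assert output_groups([3, 0], 3) == [[0, 1, 2], [0, 1, 2]]
```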
>
> Note that this means txids are very unstable: if a tx is only protected
> by SIGHASH_GROUP commitments then miners/relayers can add outputs, or
> reorganise the groups without making the tx invalid. Beyond requiring
> the signatures to be APO/APOAS-based to deal with that, we'd also need
> to avoid txs getting rbf-pinned by some malicious third party who pulls
> apart the groups and assembles a new tx that's hard to rbf but also
> unlikely to confirm due to having a low feerate.
>
> Note also that not reusing addresses solves this case -- it's only a
> problem when you're paying the same amounts to the same addresses.
>
> Being able to combine additional inputs and outputs at a later date
> (which necessarily changes the txid) is an advantage though: it lets
> you add additional funds and claim change, which allows you to adjust
> to different fee rates.
>
> I don't think the SIGHASH_GROUP approach would work very well without
> access to the annex, ie if you're trying to do CTV encoded either in a
> plain scriptPubKey or via segwit/p2sh.
>
> I think that would give 16 different sighashes, choosing one of four
> options for outputs,
>
> ALL/NONE/SINGLE/GROUP
> -- which outputs are committed to
>
> and one of four options for inputs,
>
> -/ANYONECANPAY/ANYPREVOUT/ANYPREVOUTANYSCRIPT
> -- all inputs committed to, specific input committed to,
> scriptpubkey/tapscript committed to, or just the
> nseq/annex/codesep_pos
>
> vs the ~155,000 sighashes in the TXHASH proposal.
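The count of 16 is just the 4x4 product of the two choices listed above, which can be enumerated directly (mode names as in the quoted lists; "" stands for the default all-inputs mode):

```python
from itertools import product

OUTPUT_MODES = ("ALL", "NONE", "SINGLE", "GROUP")
INPUT_MODES = ("", "ANYONECANPAY", "ANYPREVOUT", "ANYPREVOUTANYSCRIPT")

# Every combination of an input mode and an output mode is one sighash.
sighashes = ["|".join(filter(None, (i, o)))
             for i, o in product(INPUT_MODES, OUTPUT_MODES)]
assert len(sighashes) == 16
assert "ANYPREVOUTANYSCRIPT|GROUP" in sighashes
```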
>
> I don't think there's an efficient way of doing SIGHASH_GROUP via tx
> introspection opcodes that doesn't also introduce a quadratic hashing
> risk -- you need to prevent different inputs from re-hashing distinct but
> overlapping sets of outputs, and if your opcodes only allow grabbing one
> output at a time to add to the message being signed you have to do a lot
> of coding if you want to let the signer choose how many outputs to commit
> to; if you provide an opcode that grabs many outputs to hash, it seems
> hard to do that generically in a way that avoids quadratic behaviour.
>
> So I think that suggests two alternative approaches, beyond the
> VERIFY-vs-PUSH semantic:
>
> - have a dedicated sighash type for CTV (either an explicit one for it,
> per bip119, or support thousands of options like the proposal in this
> thread, one of which happens to be about the same as the bip119 idea)
>
> - use ANYPREVOUTANYSCRIPT|GROUP for CTV, which means also implementing
> annex parsing and better RBF behaviour to avoid those txs being
> excessively vulnerable to pinning; with the advantage being that
> txs using "GROUP" sigs can be combined either for batching purposes
> or for adapting to the fee market after the signature has been made,
> and the disadvantage that you can't rely on stable txids when looking
> for CTV spends and have to continue using APO/APOAS when chaining
> signatures on top of unconfirmed CTV outputs
>
> Cheers,
> aj
>
> [0] Here's bitmatrix trying to multiply two numbers together:
>
> https://medium.com/bit-matrix/technical-how-does-bitmatrix-v1-multiply-two-integers-in-the-absence-of-op-mul-a58b7a3794a3
>
> Likewise, doing a point preimage reveal via clever scripting
> pre-taproot never saw an implementation, despite seeming
> theoretically plausible.
>
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2015-November/000344.html
>
> [1]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019243.html
>
>
[-- Attachment #2: Type: text/html, Size: 16539 bytes --]
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-01-26 17:20 [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT Russell O'Connor
2022-01-26 22:16 ` Jeremy
2022-01-28 1:34 ` Anthony Towns
@ 2022-02-08 3:40 ` Rusty Russell
2022-02-08 4:34 ` Jeremy Rubin
2 siblings, 1 reply; 58+ messages in thread
From: Rusty Russell @ 2022-02-08 3:40 UTC (permalink / raw)
To: Russell O'Connor, Bitcoin Protocol Discussion,
Bitcoin Protocol Discussion
Russell O'Connor via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> writes:
> Given the overlap in functionality between CTV and ANYPREVOUT, I think it
> makes sense to decompose their operations into their constituent pieces and
> reassemble their behaviour programmatically. To this end, I'd like to
> instead propose OP_TXHASH and OP_CHECKSIGFROMSTACKVERIFY.
>
> OP_TXHASH would pop a txhash flag from the stack and compute a (tagged)
> txhash in accordance with that flag, and push the resulting hash onto the
> stack.
It may be worth noting that OP_TXHASH can be further decomposed into
OP_TX (and OP_TAGGEDHASH, or just reuse OP_SHA256).
OP_TX would place the concatenated selected fields onto the stack
(rather than hashing them). This is more compact for some tests
(e.g. testing tx version for 2 is "OP_TX(version) 1 OP_EQUALS" vs
"OP_TXHASH(version) 012345678...aabbccddeeff OP_EQUALS"), and also range
testing (e.g. amount less than X or greater than X, or less than 3 inputs).
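The compactness point can be sketched in Python: with OP_TX the raw field is on the stack and supports range tests, while with OP_TXHASH only equality against a precomputed 32-byte constant is expressible (the 4-byte little-endian serialization here is an assumption for illustration, not a spec):

```python
import hashlib

TX = {"version": 2}  # toy transaction with only a version field

def op_tx_version(tx):
    # OP_TX(version): push the raw field value itself.
    return tx["version"]

def op_txhash_version(tx):
    # OP_TXHASH(version): push a hash of the serialized field
    # (assumed 4-byte little-endian serialization).
    return hashlib.sha256(tx["version"].to_bytes(4, "little")).digest()

# With OP_TX, equality AND range tests work on the raw value:
assert op_tx_version(TX) == 2
assert op_tx_version(TX) < 3

# With OP_TXHASH, only equality against a precomputed constant works;
# a range test like "version < 3" cannot be expressed on the hash.
expected = hashlib.sha256((2).to_bytes(4, "little")).digest()
assert op_txhash_version(TX) == expected
```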
> I believe the difficulties with upgrading TXHASH can be mitigated by
> designing a robust set of TXHASH flags from the start. For example having
> bits to control whether (1) the version is covered; (2) the locktime is
> covered; (3) txids are covered; (4) sequence numbers are covered; (5) input
> amounts are covered; (6) input scriptpubkeys are covered; (7) number of
> inputs is covered; (8) output amounts are covered; (9) output scriptpubkeys
> are covered; (10) number of outputs is covered; (11) the tapbranch is
> covered; (12) the tapleaf is covered; (13) the opseparator value is
> covered; (14) whether all, one, or no inputs are covered; (15) whether all,
> one or no outputs are covered; (16) whether the one input position is
> covered; (17) whether the one output position is covered; (18) whether the
> sighash flags are covered or not (note: whether or not the sighash flags
> are or are not covered must itself be covered). Possibly specifying which
> input or output position is covered in the single case and whether the
> position is relative to the input's position or is an absolute position.
These easily map onto OP_TX, "(1) the version is pushed as u32, (2) the
locktime is pushed as u32, ...".
We might want to push SHA256() of scripts instead of scripts themselves,
to reduce possibility of DoS.
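For illustration, the quoted flag bits (1)-(13) could be modelled as a bitfield, with a CTV-like selection composed from them (bit names are hypothetical, not any proposed encoding):

```python
from enum import IntFlag, auto

class TxFlag(IntFlag):        # illustrative bit names only
    VERSION = auto()          # (1)
    LOCKTIME = auto()         # (2)
    TXIDS = auto()            # (3)
    SEQUENCES = auto()        # (4)
    INPUT_AMOUNTS = auto()    # (5)
    INPUT_SPKS = auto()       # (6)
    INPUT_COUNT = auto()      # (7)
    OUTPUT_AMOUNTS = auto()   # (8)
    OUTPUT_SPKS = auto()      # (9)
    OUTPUT_COUNT = auto()     # (10)
    TAPBRANCH = auto()        # (11)
    TAPLEAF = auto()          # (12)
    CODESEP_POS = auto()      # (13)
    # Items (14)-(18) select among several values, so they would be
    # multi-bit fields rather than single flags.

# A CTV-like selection: version, locktime, sequences, input count,
# outputs and output count -- with txids deliberately excluded, since
# including them would create the hash cycle discussed earlier.
CTV_LIKE = (TxFlag.VERSION | TxFlag.LOCKTIME | TxFlag.SEQUENCES
            | TxFlag.INPUT_COUNT | TxFlag.OUTPUT_AMOUNTS
            | TxFlag.OUTPUT_SPKS | TxFlag.OUTPUT_COUNT)

assert TxFlag.TXIDS not in CTV_LIKE
assert TxFlag.LOCKTIME in CTV_LIKE
```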
I suggest, also, that 14 (and similarly 15) be defined as two bits:
00 - no inputs
01 - all inputs
10 - current input
11 - pop number from stack, fail if >= number of inputs or no stack elems.
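That two-bit encoding might decode as follows (illustrative sketch, not a spec):

```python
def select_inputs(mode, current_index, n_inputs, stack):
    """Decode the proposed 2-bit input-selection field."""
    if mode == 0b00:
        return []                        # no inputs
    if mode == 0b01:
        return list(range(n_inputs))     # all inputs
    if mode == 0b10:
        return [current_index]           # current input only
    # 0b11: pop an index from the stack; fail on empty stack or
    # an index >= number of inputs.
    if not stack:
        raise ValueError("no stack elements")
    i = stack.pop()
    if i >= n_inputs:
        raise ValueError("input index out of range")
    return [i]

assert select_inputs(0b00, 0, 3, []) == []
assert select_inputs(0b01, 0, 3, []) == [0, 1, 2]
assert select_inputs(0b10, 2, 3, []) == [2]
assert select_inputs(0b11, 0, 3, [1]) == [1]
```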
Cheers,
Rusty.
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-08 3:40 ` Rusty Russell
@ 2022-02-08 4:34 ` Jeremy Rubin
2022-02-11 0:55 ` [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was " David A. Harding
2022-02-15 8:45 ` [bitcoin-dev] " Rusty Russell
0 siblings, 2 replies; 58+ messages in thread
From: Jeremy Rubin @ 2022-02-08 4:34 UTC (permalink / raw)
To: Rusty Russell, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 3148 bytes --]
Rusty,
Note that this sort of design introduces recursive covenants similarly to
how I described above.
Whether that is an issue or not precluding this sort of design or not, I
defer to others.
Best,
Jeremy
On Mon, Feb 7, 2022 at 7:57 PM Rusty Russell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> Russell O'Connor via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org>
> writes:
> > Given the overlap in functionality between CTV and ANYPREVOUT, I think it
> > makes sense to decompose their operations into their constituent pieces
> and
> > reassemble their behaviour programmatically. To this end, I'd like to
> > instead propose OP_TXHASH and OP_CHECKSIGFROMSTACKVERIFY.
> >
> > OP_TXHASH would pop a txhash flag from the stack and compute a (tagged)
> > txhash in accordance with that flag, and push the resulting hash onto the
> > stack.
>
> It may be worth noting that OP_TXHASH can be further decomposed into
> OP_TX (and OP_TAGGEDHASH, or just reuse OP_SHA256).
>
> OP_TX would place the concatenated selected fields onto the stack
> (rather than hashing them). This is more compact for some tests
> (e.g. testing tx version for 2 is "OP_TX(version) 1 OP_EQUALS" vs
> "OP_TXHASH(version) 012345678...aabbccddeeff OP_EQUALS"), and also range
> testing (e.g. amount less than X or greater than X, or less than 3 inputs).
>
> > I believe the difficulties with upgrading TXHASH can be mitigated by
> > designing a robust set of TXHASH flags from the start. For example
> having
> > bits to control whether (1) the version is covered; (2) the locktime is
> > covered; (3) txids are covered; (4) sequence numbers are covered; (5)
> input
> > amounts are covered; (6) input scriptpubkeys are covered; (7) number of
> > inputs is covered; (8) output amounts are covered; (9) output
> scriptpubkeys
> > are covered; (10) number of outputs is covered; (11) the tapbranch is
> > covered; (12) the tapleaf is covered; (13) the opseparator value is
> > covered; (14) whether all, one, or no inputs are covered; (15) whether
> all,
> > one or no outputs are covered; (16) whether the one input position is
> > covered; (17) whether the one output position is covered; (18) whether
> the
> > sighash flags are covered or not (note: whether or not the sighash flags
> > are or are not covered must itself be covered). Possibly specifying
> which
> > input or output position is covered in the single case and whether the
> > position is relative to the input's position or is an absolute position.
>
> These easily map onto OP_TX, "(1) the version is pushed as u32, (2) the
> locktime is pushed as u32, ...".
>
> We might want to push SHA256() of scripts instead of scripts themselves,
> to reduce possibility of DoS.
>
> I suggest, also, that 14 (and similarly 15) be defined as two bits:
> 00 - no inputs
> 01 - all inputs
> 10 - current input
> 11 - pop number from stack, fail if >= number of inputs or no stack elems.
>
> Cheers,
> Rusty.
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 5173 bytes --]
^ permalink raw reply [flat|nested] 58+ messages in thread
* [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-08 4:34 ` Jeremy Rubin
@ 2022-02-11 0:55 ` David A. Harding
2022-02-11 3:42 ` Jeremy Rubin
` (3 more replies)
2022-02-15 8:45 ` [bitcoin-dev] " Rusty Russell
1 sibling, 4 replies; 58+ messages in thread
From: David A. Harding @ 2022-02-11 0:55 UTC (permalink / raw)
To: Jeremy Rubin, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 942 bytes --]
On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev wrote:
> Whether [recursive covenants] is an issue or not precluding this sort
> of design or not, I defer to others.
For reference, I believe the last time the merits of allowing recursive
covenants was discussed at length on this list[1], not a single person
replied to say that they were opposed to the idea.
I would like to suggest that anyone opposed to recursive covenants speak
for themselves (if any intelligent such people exist). Citing the risk
of recursive covenants without presenting a credible argument for the
source of that risk feels to me like (at best) stop energy[2] and (at
worst) FUD.
-Dave
[1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019203.html
[2] http://radio-weblogs.com/0107584/stories/2002/05/05/stopEnergyByDaveWiner.html
(thanks to AJ who told me about stop energy one time when I was
producing it)
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-11 0:55 ` [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was " David A. Harding
@ 2022-02-11 3:42 ` Jeremy Rubin
2022-02-11 17:42 ` James O'Beirne
` (2 subsequent siblings)
3 siblings, 0 replies; 58+ messages in thread
From: Jeremy Rubin @ 2022-02-11 3:42 UTC (permalink / raw)
To: David A. Harding; +Cc: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 2176 bytes --]
I don't have a specific response to share at this moment, but I may make
one later.
But for the sake of elevating the discourse, I'd encourage people
responding to this to read through
https://rubin.io/bitcoin/2021/12/04/advent-7/ as I think it has some
helpful terminology and categorizations.
I bring this up because I think that recursion is often given as a
shorthand for "powerful" because the types of operations that support
recursion typically also introduce open-ended covenants, unless they are
designed specially not to. As a trivial example, a covenant that makes a
coin spendable from itself to itself entirely with no authorization is
recursive but fully enumerated in a sense and not particularly interesting
or useful.
Therefore, when responding, you might be careful to distinguish whether it is
just recursion which you take issue with, or open-endedness, or some
combination of properties which severally might be acceptable.
TL;DR there are different properties people might care about that get
lumped in with recursion, it's good to be explicit if it is a recursion
issue or something else.
Cheers,
Jeremy
On Thu, Feb 10, 2022, 4:55 PM David A. Harding <dave@dtrt.org> wrote:
> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev
> wrote:
> > Whether [recursive covenants] is an issue or not precluding this sort
> > of design or not, I defer to others.
>
> For reference, I believe the last time the merits of allowing recursive
> covenants was discussed at length on this list[1], not a single person
> replied to say that they were opposed to the idea.
>
> I would like to suggest that anyone opposed to recursive covenants speak
> for themselves (if any intelligent such people exist). Citing the risk
> of recursive covenants without presenting a credible argument for the
> source of that risk feels to me like (at best) stop energy[2] and (at
> worst) FUD.
>
> -Dave
>
> [1]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019203.html
> [2]
> http://radio-weblogs.com/0107584/stories/2002/05/05/stopEnergyByDaveWiner.html
> (thanks to AJ who told me about stop energy one time when I was
> producing it)
>
>
[-- Attachment #2: Type: text/html, Size: 3159 bytes --]
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-11 0:55 ` [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was " David A. Harding
2022-02-11 3:42 ` Jeremy Rubin
@ 2022-02-11 17:42 ` James O'Beirne
2022-02-11 18:12 ` digital vagabond
2022-02-18 7:34 ` ZmnSCPxj
2022-02-23 11:28 ` ZmnSCPxj
3 siblings, 1 reply; 58+ messages in thread
From: James O'Beirne @ 2022-02-11 17:42 UTC (permalink / raw)
To: David A. Harding, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 3169 bytes --]
I don't oppose recursive covenants per se, but in prior posts I have
expressed uncertainty about proposals that enable more "featureful"
covenants by adding more kinds of computation into bitcoin script.
Not that anyone here is necessarily saying otherwise, but I am very
interested in limiting operations in bitcoin script to "verification" (vs.
"computation") to the extent practical, and instead encouraging general
computation be done off-chain. This of course isn't a new observation and I
think the last few years have been very successful to that effect, e.g. the
popularity of the "scriptless scripts" idea and Taproot's emphasis on
embedding computational artifacts in key tweaks.
My (maybe unfounded?) worry about opcodes like OP_CAT and OP_TX is that
more logic will live in script than is necessary, and so the burden to
verify the chain may grow and the extra "degrees of freedom" in script may
make it harder to reason about. But I guess at this point there aren't
alternative means to construct new kinds of sighashes that are necessary
for some interesting covenants.
One thing I like about CTV is that it buys a lot of functionality without
increasing the "surface area" of script's design. In general I think there
is a lot to be said for this "jets"-style approach[0] of codifying the
script operations that you'd actually want to do into single opcodes. This
adds functionality while introducing minimal surface area to script, giving
script implementers more flexibility for, say, optimization. But of course
this comes at the cost of precluding experimentation, and probably
requiring more soft-forking. Though whether the main chain is the place for
script experimentation using more general-purpose opcodes is another
interesting debate...
Sorry for going a little off-topic there.
[0]: https://medium.com/blockstream/simplicity-jets-release-803db10fd589
On Thu, Feb 10, 2022 at 7:55 PM David A. Harding via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev
> wrote:
> > Whether [recursive covenants] is an issue or not precluding this sort
> > of design or not, I defer to others.
>
> For reference, I believe the last time the merits of allowing recursive
> covenants was discussed at length on this list[1], not a single person
> replied to say that they were opposed to the idea.
>
> I would like to suggest that anyone opposed to recursive covenants speak
> for themselves (if any intelligent such people exist). Citing the risk
> of recursive covenants without presenting a credible argument for the
> source of that risk feels to me like (at best) stop energy[2] and (at
> worst) FUD.
>
> -Dave
>
> [1]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019203.html
> [2]
> http://radio-weblogs.com/0107584/stories/2002/05/05/stopEnergyByDaveWiner.html
> (thanks to AJ who told me about stop energy one time when I was
> producing it)
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 4303 bytes --]
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-11 17:42 ` James O'Beirne
@ 2022-02-11 18:12 ` digital vagabond
2022-02-12 10:54 ` darosior
2022-02-17 15:15 ` Anthony Towns
0 siblings, 2 replies; 58+ messages in thread
From: digital vagabond @ 2022-02-11 18:12 UTC (permalink / raw)
To: James O'Beirne, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 7010 bytes --]
This is Shinobi (can verify out of band at @brian_trollz on Twitter, I only
signed up to the list with this email to read initially, but feel like I
should reply to this as I think I am one of the only people in this space
who has voiced concerns with recursive covenants).
My concerns don't really center specifically around recursion itself
necessarily, but unbounded recursion in combination with too much
generality/flexibility in what types of conditions future UTXOs can be
encumbered with based on the restriction of such covenants. Forgive the
hand-waving arguments without getting into specific opcodes, but I would
summarize my concerns with a hypothetical construct that I believe would be
incredibly damaging to fungibility. Imagine a covenant design that was
flexible enough to create an encumbrance like this: a script specifies a
specific key in a multisig controlled by some authority figure (or a branch
in the script that would allow unilateral control by such an authority),
and the conditions of the covenant would perpetually require that any spend
from the covenant can only be sent to a script involving that key from said
authority, preventing by consensus any removal of that central authority's
involvement in control over that UTXO. Such a construct would present
dangerous implications to the fungibility of individual UTXOs by
introducing a totally different risk model in being paid with such a coin
compared to any other coin not encumbered by such a condition, and also
potentially introduce a shift in the scope of what a 51% attack could
accomplish in terms of permanent consequences attempting to coerce coins
into such covenants, as opposed to right now only being able to accomplish
censorship or temporary network disruption.
I know that such a walled garden could easily be constructed now with
multisig and restrictions on where coins can be withdrawn to from exchanges
or whatever place they initially purchased from, as is demonstrated by the
implementation of the Asset Management Platform by Blockstream for use on
Liquid with regulated equity tokens, but I think the important distinction
between such a non-consensus system designed to enforce such restrictions and
a recursive covenant to accomplish the same is that in the case of a
multisig/non-consensus based system, exit from that restriction is still
possible under the consensus rules of the protocol. If such a construct was
possible to build with a recursive covenant enforced by consensus, coins
encumbered by such a covenant would literally be incapable of escaping
those restrictions without hardforking the protocol, leaving any such UTXOs
permanently non-fungible with ones not encumbered by such conditions.
I'm not that deeply familiar with all the working pieces involved in the
recent TXHASH + CSFS proposal, and whether such a type of overly (IMO)
generalized recursion would be possible to construct, but one of the
reasons CTV does not bother me in terms of such concerns is the inability
to infinitely recurse in such a generalized way given the requirements to
exactly specify the destination of future spends in constructing a chain of
CTV encumbrances. I'd very much appreciate any feedback on my concerns, and
if this side tracks the discussion I apologize, but I felt given the issue
has been mentioned a few times in this thread it was appropriate for me to
voice the concerns here so they could be addressed directly.
On Fri, Feb 11, 2022 at 11:42 AM James O'Beirne via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> I don't oppose recursive covenants per se, but in prior posts I have
> expressed uncertainty about proposals that enable more "featureful"
> covenants by adding more kinds of computation into bitcoin script.
>
> Not that anyone here is necessarily saying otherwise, but I am very
> interested in limiting operations in bitcoin script to "verification" (vs.
> "computation") to the extent practical, and instead encouraging general
> computation be done off-chain. This of course isn't a new observation and I
> think the last few years have been very successful to that effect, e.g. the
> popularity of the "scriptless scripts" idea and Taproot's emphasis on
> embedding computational artifacts in key tweaks.
>
> My (maybe unfounded?) worry about opcodes like OP_CAT and OP_TX is that
> more logic will live in script than is necessary, and so the burden to
> verify the chain may grow and the extra "degrees of freedom" in script may
> make it harder to reason about. But I guess at this point there aren't
> alternative means to construct new kinds of sighashes that are necessary
> for some interesting covenants.
>
> One thing I like about CTV is that it buys a lot of functionality without
> increasing the "surface area" of script's design. In general I think there
> is a lot to be said for this "jets"-style approach[0] of codifying the
> script operations that you'd actually want to do into single opcodes. This
> adds functionality while introducing minimal surface area to script, giving
> script implementers more flexibility for, say, optimization. But of course
> this comes at the cost of precluding experimentation, and probably
> requiring more soft-forking. Though whether the place for script
> experimentation using more general-purpose opcodes on the main chain is
> another interesting debate...
>
> Sorry for going a little off-topic there.
>
> [0]: https://medium.com/blockstream/simplicity-jets-release-803db10fd589
>
>
> On Thu, Feb 10, 2022 at 7:55 PM David A. Harding via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev
>> wrote:
>> > Whether [recursive covenants] is an issue or not precluding this sort
>> > of design or not, I defer to others.
>>
>> For reference, I believe the last time the merits of allowing recursive
>> covenants was discussed at length on this list[1], not a single person
>> replied to say that they were opposed to the idea.
>>
>> I would like to suggest that anyone opposed to recursive covenants speak
>> for themselves (if any intelligent such people exist). Citing the risk
>> of recursive covenants without presenting a credible argument for the
>> source of that risk feels to me like (at best) stop energy[2] and (at
>> worst) FUD.
>>
>> -Dave
>>
>> [1]
>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019203.html
>> [2]
>> http://radio-weblogs.com/0107584/stories/2002/05/05/stopEnergyByDaveWiner.html
>> (thanks to AJ who told me about stop energy one time when I was
>> producing it)
>>
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 8671 bytes --]
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-11 18:12 ` digital vagabond
@ 2022-02-12 10:54 ` darosior
2022-02-12 15:59 ` Billy Tetrud
2022-02-17 15:15 ` Anthony Towns
1 sibling, 1 reply; 58+ messages in thread
From: darosior @ 2022-02-12 10:54 UTC (permalink / raw)
To: digital vagabond, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 7922 bytes --]
> Such a construct would present dangerous implications to the fungibility of individual UTXOs by introducing a totally different risk model in being paid with such a coin compared to any other coin not encumbered by such a condition
How is that different from being paid in an altcoin?
It seems to me that being able to say "sorry, your money isn't good here" is at the heart of Bitcoin's security (similarly to enforcing the network rules with your node). If someone can coerce you into using another currency, you've already lost.
Now there remains the influence on the system of a user being coerced into using gov coin (on another chain) or an encumbered bit coin. Sure, the latter would decrease the supply available, but that's already possible to do today.
------- Original Message -------
Le vendredi 11 février 2022 à 7:12 PM, digital vagabond via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> a écrit :
> This is Shinobi (can verify out of band at @brian_trollz on Twitter, I only signed up to the list with this email to read initially, but feel like I should reply to this as I think I am one of the only people in this space who has voiced concerns with recursive covenants).
>
> My concerns don't really center specifically around recursion itself necessarily, but unbounded recursion in combination with too much generality/flexibility in what types of conditions future UTXOs can be encumbered with based on the restriction of such covenants. Forgive the hand waiving arguments without getting into specific opcodes, but I would summarize my concerns with a hypothetical construct that I believe would be incredibly damaging to fungibility. Imagine a covenant design that was flexible enough to create an encumbrance like this: a script specifies a specific key in a multisig controlled by some authority figure (or a branch in the script that would allow unilateral control by such an authority), and the conditions of the covenant would perpetually require than any spend from the covenant can only be sent to a script involving that key from said authority, preventing by consensus any removal of that central authorities involvement in control over that UTXO. Such a construct would present dangerous implications to the fungibility of individual UTXOs by introducing a totally different risk model in being paid with such a coin compared to any other coin not encumbered by such a condition, and also potentially introduce a shift in the scope of what a 51% attack could accomplish in terms of permanent consequences attempting to coerce coins into such covenants, as opposed to right now only being able to accomplish censorship or temporary network disruption.
>
> I know that such a walled garden could easily be constructed now with multisig and restrictions on where coins can be withdrawn to from exchanges or whatever place they initially purchased from, as is demonstrated by the implementation of the Asset Management Platform by Blockstream for use on Liquid with regulated equity tokens, but I think the important distinction between such non-consensus system designed to enforce such restrictions and a recursive covenant to accomplish the same is that in the case of a multisig/non-consensus based system, exit from that restriction is still possible under the consensus rules of the protocol. If such a construct was possible to build with a recursive covenant enforced by consensus, coins encumbered by such a covenant would literally be incapable of escaping those restrictions without hardforking the protocol, leaving any such UTXOs permanently non-fungible with ones not encumbered by such conditions.
>
> I'm not that deeply familiar with all the working pieces involved in the recent TXHASH + CSFS proposal, and whether such a type of overly (IMO) generalized recursion would be possible to construct, but one of the reasons CTV does not bother me in terms of such concerns is the inability to infinitely recurse in such a generalized way given the requirements to exactly specify the destination of future spends in constructing a chain of CTV encumbrances. I'd very much appreciate any feedback on my concerns, and if this sidetracks the discussion I apologize, but I felt given the issue has been mentioned a few times in this thread it was appropriate for me to voice the concerns here so they could be addressed directly.
>
> On Fri, Feb 11, 2022 at 11:42 AM James O'Beirne via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> I don't oppose recursive covenants per se, but in prior posts I have expressed uncertainty about proposals that enable more "featureful" covenants by adding more kinds of computation into bitcoin script.
>>
>> Not that anyone here is necessarily saying otherwise, but I am very interested in limiting operations in bitcoin script to "verification" (vs. "computation") to the extent practical, and instead encouraging general computation be done off-chain. This of course isn't a new observation and I think the last few years have been very successful to that effect, e.g. the popularity of the "scriptless scripts" idea and Taproot's emphasis on embedding computational artifacts in key tweaks.
>>
>> My (maybe unfounded?) worry about opcodes like OP_CAT and OP_TX is that more logic will live in script than is necessary, and so the burden to verify the chain may grow and the extra "degrees of freedom" in script may make it harder to reason about. But I guess at this point there aren't alternative means to construct new kinds of sighashes that are necessary for some interesting covenants.
>>
>> One thing I like about CTV is that it buys a lot of functionality without increasing the "surface area" of script's design. In general I think there is a lot to be said for this "jets"-style approach[0] of codifying the script operations that you'd actually want to do into single opcodes. This adds functionality while introducing minimal surface area to script, giving script implementers more flexibility for, say, optimization. But of course this comes at the cost of precluding experimentation, and probably requiring more soft-forking. Though whether the place for script experimentation using more general-purpose opcodes on the main chain is another interesting debate...
>>
>> Sorry for going a little off-topic there.
>>
>> [0]: https://medium.com/blockstream/simplicity-jets-release-803db10fd589
>>
>> On Thu, Feb 10, 2022 at 7:55 PM David A. Harding via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev wrote:
>>>> Whether [recursive covenants] is an issue or not precluding this sort
>>>> of design or not, I defer to others.
>>>
>>> For reference, I believe the last time the merits of allowing recursive
>>> covenants was discussed at length on this list[1], not a single person
>>> replied to say that they were opposed to the idea.
>>>
>>> I would like to suggest that anyone opposed to recursive covenants speak
>>> for themselves (if any intelligent such people exist). Citing the risk
>>> of recursive covenants without presenting a credible argument for the
>>> source of that risk feels to me like (at best) stop energy[2] and (at
>>> worst) FUD.
>>>
>>> -Dave
>>>
>>> [1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019203.html
>>> [2] http://radio-weblogs.com/0107584/stories/2002/05/05/stopEnergyByDaveWiner.html
>>> (thanks to AJ who told me about stop energy one time when I was
>>> producing it)
>>>
>>> _______________________________________________
>>> bitcoin-dev mailing list
>>> bitcoin-dev@lists.linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
[-- Attachment #2: Type: text/html, Size: 10590 bytes --]
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-12 10:54 ` darosior
@ 2022-02-12 15:59 ` Billy Tetrud
0 siblings, 0 replies; 58+ messages in thread
From: Billy Tetrud @ 2022-02-12 15:59 UTC (permalink / raw)
To: darosior, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 9329 bytes --]
> in the case of a multisig/non-consensus based system, exit from that
> restriction is still possible
But why do we care if someone reduces the value of coins they own by
permanently encumbering them in some way? Burning coins permanently
encumbers them so much they can't be spent at all. If the worry is
depleting the supply of sats, don't worry, the amount of value lost by
those encumbered is gained by the rest of the coins. Just like burning,
encumbering your coins in a way that devalues them is a donation to the
rest of us.
Could you clarify what harm there is to those who choose not to accept such
encumbered coins? Or are you just saying that those who do accept such
encumbered coins may be harmed by doing so?
On Sat, Feb 12, 2022, 06:11 darosior via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> Such a construct would present dangerous implications to the fungibility
> of individual UTXOs by introducing a totally different risk model in being
> paid with such a coin compared to any other coin not encumbered by such a
> condition
>
>
> How is that different from being paid in an altcoin?
> It seems to me that being able to say "sorry, your money isn't good here"
> is at the heart of Bitcoin's security (similarly to enforcing the network
> rules with your node). If someone can coerce you into using another
> currency, you've already lost.
>
> Now there remains the influence on the system of a user being coerced
> into using gov coin (on another chain) or an encumbered bit coin. Sure the
> latter would decrease the supply available, but that's already possible to
> do today.
>
> ------- Original Message -------
> Le vendredi 11 février 2022 à 7:12 PM, digital vagabond via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> a écrit :
>
> This is Shinobi (can verify out of band at @brian_trollz on Twitter, I
> only signed up to the list with this email to read initially, but feel like
> I should reply to this as I think I am one of the only people in this space
> who has voiced concerns with recursive covenants).
>
> My concerns don't really center specifically around recursion itself
> necessarily, but unbounded recursion in combination with too much
> generality/flexibility in what types of conditions future UTXOs can be
> encumbered with based on the restriction of such covenants. Forgive the
> hand-waving arguments without getting into specific opcodes, but I would
> summarize my concerns with a hypothetical construct that I believe would be
> incredibly damaging to fungibility. Imagine a covenant design that was
> flexible enough to create an encumbrance like this: a script specifies a
> specific key in a multisig controlled by some authority figure (or a branch
> in the script that would allow unilateral control by such an authority),
> and the conditions of the covenant would perpetually require that any spend
> from the covenant can only be sent to a script involving that key from said
> authority, preventing by consensus any removal of that central authority's
> involvement in control over that UTXO. Such a construct would present
> dangerous implications to the fungibility of individual UTXOs by
> introducing a totally different risk model in being paid with such a coin
> compared to any other coin not encumbered by such a condition, and also
> potentially introduce a shift in the scope of what a 51% attack could
> accomplish in terms of permanent consequences by attempting to coerce coins
> into such covenants, as opposed to right now only being able to accomplish
> censorship or temporary network disruption.
>
> I know that such a walled garden could easily be constructed now with
> multisig and restrictions on where coins can be withdrawn to from exchanges
> or whatever place they initially purchased from, as is demonstrated by the
> implementation of the Asset Management Platform by Blockstream for use on
> Liquid with regulated equity tokens, but I think the important distinction
> between such a non-consensus system designed to enforce such restrictions and
> a recursive covenant to accomplish the same is that in the case of a
> multisig/non-consensus based system, exit from that restriction is still
> possible under the consensus rules of the protocol. If such a construct was
> possible to build with a recursive covenant enforced by consensus, coins
> encumbered by such a covenant would literally be incapable of escaping
> those restrictions without hardforking the protocol, leaving any such UTXOs
> permanently non-fungible with ones not encumbered by such conditions.
>
> I'm not that deeply familiar with all the working pieces involved in the
> recent TXHASH + CSFS proposal, and whether such a type of overly (IMO)
> generalized recursion would be possible to construct, but one of the
> reasons CTV does not bother me in terms of such concerns is the inability
> to infinitely recurse in such a generalized way given the requirements to
> exactly specify the destination of future spends in constructing a chain of
> CTV encumbrances. I'd very much appreciate any feedback on my concerns, and
> if this sidetracks the discussion I apologize, but I felt given the issue
> has been mentioned a few times in this thread it was appropriate for me to
> voice the concerns here so they could be addressed directly.
>
> On Fri, Feb 11, 2022 at 11:42 AM James O'Beirne via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> I don't oppose recursive covenants per se, but in prior posts I have
>> expressed uncertainty about proposals that enable more "featureful"
>> covenants by adding more kinds of computation into bitcoin script.
>>
>> Not that anyone here is necessarily saying otherwise, but I am very
>> interested in limiting operations in bitcoin script to "verification" (vs.
>> "computation") to the extent practical, and instead encouraging general
>> computation be done off-chain. This of course isn't a new observation and I
>> think the last few years have been very successful to that effect, e.g. the
>> popularity of the "scriptless scripts" idea and Taproot's emphasis on
>> embedding computational artifacts in key tweaks.
>>
>> My (maybe unfounded?) worry about opcodes like OP_CAT and OP_TX is that
>> more logic will live in script than is necessary, and so the burden to
>> verify the chain may grow and the extra "degrees of freedom" in script may
>> make it harder to reason about. But I guess at this point there aren't
>> alternative means to construct new kinds of sighashes that are necessary
>> for some interesting covenants.
>>
>> One thing I like about CTV is that it buys a lot of functionality without
>> increasing the "surface area" of script's design. In general I think there
>> is a lot to be said for this "jets"-style approach[0] of codifying the
>> script operations that you'd actually want to do into single opcodes. This
>> adds functionality while introducing minimal surface area to script, giving
>> script implementers more flexibility for, say, optimization. But of course
>> this comes at the cost of precluding experimentation, and probably
>> requiring more soft-forking. Though whether the place for script
>> experimentation using more general-purpose opcodes on the main chain is
>> another interesting debate...
>>
>> Sorry for going a little off-topic there.
>>
>> [0]: https://medium.com/blockstream/simplicity-jets-release-803db10fd589
>>
>>
>> On Thu, Feb 10, 2022 at 7:55 PM David A. Harding via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev
>>> wrote:
>>> > Whether [recursive covenants] is an issue or not precluding this sort
>>> > of design or not, I defer to others.
>>>
>>> For reference, I believe the last time the merits of allowing recursive
>>> covenants was discussed at length on this list[1], not a single person
>>> replied to say that they were opposed to the idea.
>>>
>>> I would like to suggest that anyone opposed to recursive covenants speak
>>> for themselves (if any intelligent such people exist). Citing the risk
>>> of recursive covenants without presenting a credible argument for the
>>> source of that risk feels to me like (at best) stop energy[2] and (at
>>> worst) FUD.
>>>
>>> -Dave
>>>
>>> [1]
>>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019203.html
>>> [2]
>>> http://radio-weblogs.com/0107584/stories/2002/05/05/stopEnergyByDaveWiner.html
>>> (thanks to AJ who told me about stop energy one time when I was
>>> producing it)
>>>
>>> _______________________________________________
>>> bitcoin-dev mailing list
>>> bitcoin-dev@lists.linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 12827 bytes --]
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-08 4:34 ` Jeremy Rubin
2022-02-11 0:55 ` [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was " David A. Harding
@ 2022-02-15 8:45 ` Rusty Russell
2022-02-15 18:57 ` Jeremy Rubin
1 sibling, 1 reply; 58+ messages in thread
From: Rusty Russell @ 2022-02-15 8:45 UTC (permalink / raw)
To: Jeremy Rubin, Bitcoin Protocol Discussion
Jeremy Rubin <jeremy.l.rubin@gmail.com> writes:
> Rusty,
>
> Note that this sort of design introduces recursive covenants similarly to
> how I described above.
>
> Whether that is an issue or not precluding this sort of design or not, I
> defer to others.
Good point!
But I think it's a distinction without meaning: AFAICT iterative
covenants are possible with OP_CTV and just as powerful, though
technically finite. I can constrain the next 100M spends, for
example: if I insist on those each having incrementing nLocktime,
that's effectively forever.
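[Editorially, the chain Rusty describes can be sketched by building the
commitments backwards. This is a toy model only: `ctv_template_hash` below is
an invented stand-in for BIP 119's real template hash, which commits to many
more transaction fields; the byte layout (`0x20` push of 32 bytes followed by
`0xb3`, the OP_NOP4 byte CTV proposes to reuse) is the only real detail.]

```python
import hashlib

OP_CTV = b"\xb3"  # OP_NOP4, the opcode BIP 119 proposes as OP_CHECKTEMPLATEVERIFY

def ctv_template_hash(next_spk: bytes, nlocktime: int) -> bytes:
    # Invented stand-in: the real BIP 119 hash commits to version,
    # locktime, outputs, and several other fields, not just these two.
    return hashlib.sha256(next_spk + nlocktime.to_bytes(4, "little")).digest()

def build_covenant_chain(final_spk: bytes, steps: int) -> list[bytes]:
    """Work backwards so that step i can only pay to step i+1, each hop
    committing to an incrementing nLockTime, as in the example above."""
    spk = final_spk
    spks = [spk]
    for locktime in range(steps, 0, -1):
        h = ctv_template_hash(spk, locktime)
        spk = b"\x20" + h + OP_CTV  # <32-byte template hash> OP_CTV
        spks.append(spk)
    spks.reverse()  # spks[0] is the first script; locktimes increment forward
    return spks

chain = build_covenant_chain(b"final", steps=5)
```

The point being that every template must be enumerated up front, so "the next
100M spends" means 100M pre-computed hashes, finite but effectively unbounded.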
Thanks!
Rusty.
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-15 8:45 ` [bitcoin-dev] " Rusty Russell
@ 2022-02-15 18:57 ` Jeremy Rubin
2022-02-15 19:12 ` Russell O'Connor
2022-02-16 2:26 ` Rusty Russell
0 siblings, 2 replies; 58+ messages in thread
From: Jeremy Rubin @ 2022-02-15 18:57 UTC (permalink / raw)
To: Rusty Russell; +Cc: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 3165 bytes --]
Hi Rusty,
Please see my post in the other email thread
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019886.html
The differences in this regard are several, and worth understanding beyond
"you can iterate CTV". I'd note a few clear examples for showing that "CTV
is just as powerful" is not a valid claim:
1) CTV requires the contract to be fully enumerated and is non-recursive.
For example, a simple contract that allows n participants to take an action
in any order requires factorially many pre-computations, not just linear or
constant. For reference, 24! is about 2**80. Whereas for a more
interpretive covenant -- which is often introduced with the features for
recursion -- you can compute the programs for these addresses in constant
time.
2) CTV requires the contract to be fully enumerated: For example, a simple
contract one could write is "Output 0 script matches Output 1", and the set
of outcomes is again unbounded a-priori. With CTV you need to know the set
of pairs you'd like to be able to expand to a-priori
3) Combining 1 and 2, you could imagine recursing on an open-ended thing
like creating many identical outputs over time but not constraining what
those outputs are. E.g., Output 0 matches Input 0, Output 1 matches Output
2.
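[The factorial blow-up in point 1 is easy to check numerically; a quick
sketch of the arithmetic behind "24! is about 2**80":]

```python
import math

# Distinct orderings in which 24 participants could act: with CTV, each
# ordering would need its own pre-computed template, since the contract
# must be fully enumerated up front.
orderings = math.factorial(24)
print(orderings)             # 620448401733239439360000
print(math.log2(orderings))  # ~79.0 bits, i.e. roughly 2**80 templates
```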
I think for your point the inverse seems to hold: for the limited
situations we might want to set up, CTV often ends up being sufficient
because usually we can enumerate all the possible outcomes we'd like (or at
least find a mapping onto such a construction). CTV is indeed very
powerful, but as I demonstrated above, not powerful in the same way
("Complexity Class") that OP_TX or TXHASH might be.
At the very least we should clearly understand *what* and *why* we are
advocating for more sophisticated designs, and have a thorough understanding
of the protocol complexity of the expanded functionality we are motivated to
introduce. Further, if one advocates for TX/TXHASH on a featureful
basis, it's at least a technical ACK on the functionality CTV is
introducing (as it is a subset) and perhaps a disagreement on project
management, which I think is worth noting. There is a very wide gap between
"X is unsafe" and "I prefer Y which X is a subset of ''.
I'll close by repeating : Whether that [the recursive/open ended
properties] is an issue or not precluding this sort of design or not, I
defer to others.
Best,
Jeremy
--
@JeremyRubin <https://twitter.com/JeremyRubin>
On Tue, Feb 15, 2022 at 12:46 AM Rusty Russell <rusty@rustcorp.com.au>
wrote:
> Jeremy Rubin <jeremy.l.rubin@gmail.com> writes:
> > Rusty,
> >
> > Note that this sort of design introduces recursive covenants similarly to
> > how I described above.
> >
> > Whether that is an issue or not precluding this sort of design or not, I
> > defer to others.
>
> Good point!
>
> But I think it's a distinction without meaning: AFAICT iterative
> covenants are possible with OP_CTV and just as powerful, though
> technically finite. I can constrain the next 100M spends, for
> example: if I insist on those each having incrementing nLocktime,
> that's effectively forever.
>
> Thanks!
> Rusty.
>
[-- Attachment #2: Type: text/html, Size: 7029 bytes --]
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-15 18:57 ` Jeremy Rubin
@ 2022-02-15 19:12 ` Russell O'Connor
2022-02-16 2:26 ` Rusty Russell
1 sibling, 0 replies; 58+ messages in thread
From: Russell O'Connor @ 2022-02-15 19:12 UTC (permalink / raw)
To: Jeremy Rubin; +Cc: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 2752 bytes --]
On Tue, Feb 15, 2022 at 1:57 PM Jeremy Rubin <jeremy.l.rubin@gmail.com>
wrote:
> Hi Rusty,
>
> Please see my post in the other email thread
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019886.html
>
> The differences in this regard are several, and worth understanding beyond
> "you can iterate CTV". I'd note a few clear examples for showing that "CTV
> is just as powerful" is not a valid claim:
>
> 1) CTV requires the contract to be fully enumerated and is non-recursive.
> For example, a simple contract that allows n participants to take an action
> in any order requires factorially many pre-computations, not just linear or
> constant. For reference, 24! is about 2**80. Whereas for a more
> interpretive covenant -- which is often introduced with the features for
> recursion -- you can compute the programs for these addresses in constant
> time.
> 2) CTV requires the contract to be fully enumerated: For example, a simple
> contract one could write is "Output 0 script matches Output 1", and the set
> of outcomes is again unbounded a-priori. With CTV you need to know the set
> of pairs you'd like to be able to expand to a-priori
> 3) Combining 1 and 2, you could imagine recursing on an open-ended thing
> like creating many identical outputs over time but not constraining what
> those outputs are. E.g., Output 0 matches Input 0, Output 1 matches Output
> 2.
>
> I think for your point the inverse seems to hold: for the limited
> situations we might want to set up, CTV often ends up being sufficient
> because usually we can enumerate all the possible outcomes we'd like (or at
> least find a mapping onto such a construction). CTV is indeed very
> powerful, but as I demonstrated above, not powerful in the same way
> ("Complexity Class") that OP_TX or TXHASH might be.
>
Just to be clear, if OP_TXHASH is restricted to including the flags for the
values to be hashed (at least for OP_TXHASH0), we don't appear to enter
recursive covenant territory, as long as we remain without OP_CAT.
> At the very least we should clearly understand *what* and *why* we are
> advocating for more sophisticated designs and have a thorough understanding
> of the protocol complexity we are motivated to introduce the expanded
> functionality. Further, if one advocates for TX/TXHASH on a featureful
> basis, it's at least a technical ACK on the functionality CTV is
> introducing (as it is a subset) and perhaps a disagreement on project
> management, which I think is worth noting. There is a very wide gap between
> "X is unsafe" and "I prefer Y which X is a subset of ''.
>
I'm certainly of the opinion we should have some feature to enable the
commitment of outputs. It seems quite useful in various protocols.
[-- Attachment #2: Type: text/html, Size: 4618 bytes --]
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-15 18:57 ` Jeremy Rubin
2022-02-15 19:12 ` Russell O'Connor
@ 2022-02-16 2:26 ` Rusty Russell
2022-02-16 4:10 ` Russell O'Connor
1 sibling, 1 reply; 58+ messages in thread
From: Rusty Russell @ 2022-02-16 2:26 UTC (permalink / raw)
To: Jeremy Rubin; +Cc: Bitcoin Protocol Discussion
Jeremy Rubin <jeremy.l.rubin@gmail.com> writes:
> Hi Rusty,
>
> Please see my post in the other email thread
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019886.html
>
> The differences in this regard are several, and worth understanding beyond
> "you can iterate CTV". I'd note a few clear examples for showing that "CTV
> is just as powerful" is not a valid claim:
>
> 1) CTV requires the contract to be fully enumerated and is non-recursive.
> For example, a simple contract that allows n participants to take an action
> in any order requires factorially many pre-computations, not just linear or
> constant. For reference, 24! is about 2**80. Whereas for a more
> interpretive covenant -- which is often introduced with the features for
> recursion -- you can compute the programs for these addresses in constant
> time.
> 2) CTV requires the contract to be fully enumerated: For example, a simple
> contract one could write is "Output 0 script matches Output 1", and the set
> of outcomes is again unbounded a-priori. With CTV you need to know the set
> of pairs you'd like to be able to expand to a-priori
> 3) Combining 1 and 2, you could imagine recursing on an open-ended thing
> like creating many identical outputs over time but not constraining what
> those outputs are. E.g., Output 0 matches Input 0, Output 1 matches Output
> 2.
Oh agreed.  It was the distinction of "recursive" vs "not recursive" which
was less useful in this context.
"limited to complete enumeration" is the more useful distinction: it's a
bright line between CTV and TXHASH IMHO.
> I'll close by repeating : Whether that [the recursive/open ended
> properties] is an issue or not precluding this sort of design or not, I
> defer to others.
Yeah. There's been some feeling that complex scripting is bad, because
people can lose money (see the various attempts to defang
SIGHASH_NOINPUT). I reject that; since script exists, we've crossed the
Rubicon, so let's make the tools as clean and clear as we can.
Cheers!
Rusty.
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-16 2:26 ` Rusty Russell
@ 2022-02-16 4:10 ` Russell O'Connor
0 siblings, 0 replies; 58+ messages in thread
From: Russell O'Connor @ 2022-02-16 4:10 UTC (permalink / raw)
To: Rusty Russell; +Cc: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 1908 bytes --]
On Tue, Feb 15, 2022 at 10:45 PM Rusty Russell <rusty@rustcorp.com.au>
wrote:
> Jeremy Rubin <jeremy.l.rubin@gmail.com> writes:
> > Hi Rusty,
> >
> > Please see my post in the other email thread
> >
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019886.html
> >
> > The differences in this regard are several, and worth understanding
> beyond
> > "you can iterate CTV". I'd note a few clear examples for showing that
> "CTV
> > is just as powerful" is not a valid claim:
> >
> > 1) CTV requires the contract to be fully enumerated and is non-recursive.
> > For example, a simple contract that allows n participants to take an
> action
> > in any order requires factorially many pre-computations, not just linear
> or
> > constant. For reference, 24! is about 2**80. Whereas for a more
> > interpretive covenant -- which is often introduced with the features for
> > recursion -- you can compute the programs for these addresses in constant
> > time.
> > 2) CTV requires the contract to be fully enumerated: For example, a
> simple
> > contract one could write is "Output 0 script matches Output 1", and the
> set
> > of outcomes is again unbounded a-priori. With CTV you need to know the
> set
> > of pairs you'd like to be able to expand to a-priori
> > 3) Combining 1 and 2, you could imagine recursing on an open-ended thing
> > like creating many identical outputs over time but not constraining what
> > those outputs are. E.g., Output 0 matches Input 0, Output 1 matches
> Output
> > 2.
>
> Oh agreed.  It was the distinction of "recursive" vs "not recursive" which
> was less useful in this context.
>
> "limited to complete enumeration" is the more useful distinction: it's a
> bright line between CTV and TXHASH IMHO.
>
If TXHASH is limited to requiring the flags be included in the hash (as is
done with sighash) I believe TXHASH has the same "up front" nature that CTV
has.
[-- Attachment #2: Type: text/html, Size: 2640 bytes --]
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-08 2:16 ` Russell O'Connor
@ 2022-02-17 14:27 ` Anthony Towns
2022-02-17 14:50 ` Russell O'Connor
0 siblings, 1 reply; 58+ messages in thread
From: Anthony Towns @ 2022-02-17 14:27 UTC (permalink / raw)
To: Russell O'Connor, Bitcoin Protocol Discussion
On Mon, Feb 07, 2022 at 09:16:10PM -0500, Russell O'Connor via bitcoin-dev wrote:
> > > For more complex interactions, I was imagining combining this TXHASH
> > > proposal with CAT and/or rolling SHA256 opcodes.
> Indeed, and we really want something that can be programmed at redemption
> time.
I mean, ideally we'd want something that can be flexibly programmed at
redemption time, in a way that requires very few bytes to express the
common use cases, is very efficient to execute even if used maliciously,
is hard to misuse accidently, and can be cleanly upgraded via soft fork
in the future if needed?
That feels like it's probably got a "fast, cheap, good" paradox buried
in there, but even if it doesn't, it doesn't seem like something you
can really achieve by tweaking around the edges?
> That probably involves something like how the historic MULTISIG worked by
> having a list of input / output indexes be passed in along with length
> arguments.
>
> I don't think there will be problems with quadratic hashing here because as
> more inputs are listed, the witness in turn grows larger itself.
If you cache the hash of each input/output, it would mean each byte of
the witness would be hashing at most an extra 32 bytes of data pulled
from that cache, so I think you're right. Three bytes of "script" can
already cause you to rehash an additional ~500 bytes (DUP SHA256 DROP),
so that should be within the existing computation-vs-weight relationship.
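[The arithmetic behind that bound, using 520 bytes, the consensus maximum
stack element size, as the concrete figure for the "~500 bytes" above; the
variable names are illustrative only:]

```python
# Worst case already allowed: "DUP SHA256 DROP" is 3 script bytes and
# rehashes one max-size (520-byte) stack element each time it appears.
bytes_hashed_per_script_byte_today = 520 / 3  # ~173 bytes hashed per byte

# With per-input/per-output digest caching, a one-byte index in the
# witness pulls at most one 32-byte cached hash into the TXHASH input.
bytes_hashed_per_witness_byte_cached = 32 / 1

# So cached TXHASH stays well inside the existing weight/computation ratio.
assert bytes_hashed_per_witness_byte_cached < bytes_hashed_per_script_byte_today
```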
If you add the ability to hash a chosen output (as Rusty suggests, and
which would allow you to simulate SIGHASH_GROUP), you probably have to
increase your cache to cover each output's scriptPubKey simultaneously,
which might be annoying, but doesn't seem fatal.
> That said, your SIGHASH_GROUP proposal suggests that some sort of
> intra-input communication is really needed, and that is something I would
> need to think about.
I think the way to look at it is that it trades off spending an extra
witness byte or three per output (your way, give or take) vs only being
able to combine transactions in limited ways (sighash_group), but being
able to be more optimised than the more manual approach.
That's a fine tradeoff to make for something that's common -- you
save onchain data, make something easier to use, and can optimise the
implementation so that it handles the common case more efficiently.
(That's a bit of a "premature optimisation" thing though -- we can't
currently do SIGHASH_GROUP style things, so how can you sensibly justify
optimising it because it's common, when it's not only currently not
common, but also not possible? That seems to me a convincing reason to
make script more expressive)
> While normally I'd be hesitant about this sort of feature creep, when we
> are talking about doing soft-forks, I really think it makes sense to think
> through these sorts of issues (as we are doing here).
+1
I guess I especially appreciate your goodwill here, because this has
sure turned out to be a pretty long message as I think some of these
things through out loud :)
> > "CAT" and "CHECKSIGFROMSTACK" are both things that have been available in
> > elements for a while; has anyone managed to build anything interesting
> > with them in practice, or are they only useful for thought experiments
> > and blog posts? To me, that suggests that while they're useful for
> > theoretical discussion, they don't turn out to be a good design in
> > practice.
> Perhaps the lesson to be drawn is that languages should support multiplying
> two numbers together.
Well, then you get to the question of whether that's enough, or if
you need to be able to multiply bignums together, etc?
I was looking at uniswap-like things on liquid, and wanted to do constant
product for multiple assets -- but you already get the problem that "x*y
< k" might overflow if the output values x and y are ~50 bits each, and
that gets worse with three assets and wanting to calculate "x*y*z < k",
etc. And really you'd rather calculate "a*log(x) + b*log(y) + c*log(z)
< k" instead, which then means implementing fixed point log in script...
> Having 2/3rd of the language you need to write interesting programs doesn't
> mean that you get 2/3rd of the interesting programs written.
I guess to abuse that analogy: I think you're saying something like
we've currently got 67% of an ideal programming language, and CTV
would give us 68%, but that would only take us from 10% to 11% of the
interesting programs. I agree txhash might bump that up to, say, 69%
(nice) but I'm not super convinced that even moves us from 11% to 12%
of interesting programs, let alone a qualitative leap to 50% or 70%
of interesting programs.
It's *possible* that the ideal combination of opcodes will turn out to
be CAT, TXHASH, CHECKSIGFROMSTACK, MUL64LE, etc, but it feels like it'd
be better working something out that fits together well, rather than
adding things piecemeal and hoping we don't spend all that effort to
end up in a local optimum that's a long way short of a global optimum?
[rearranged:]
> The flexibility of TXHASH is intended to head off the need for future soft
> forks. If we had specific applications in mind, we could simply set up the
> transaction hash flags to cover all the applications we know about. But it
> is the applications that we don't know about that worry me. If we don't
> put options in place with this soft-fork proposal, then they will need
> their own soft-fork down the line; and the next application after that, and
> so on.
>
> If our attitude is to craft our soft-forks as narrowly as possible to limit
> them to what only allows for given tasks, then we are going to end up
> needing a lot more soft-forks, and that is not a good outcome.
I guess I'm not super convinced that we're anywhere near the right level
of generality that this would help in avoiding future soft forks? That's
what I meant by it not covering SIGHASH_GROUP.
I guess the model I have in my head is that we should ideally have
a general/flexible/expressive but expensive way of doing whatever
scripting you like (so a "SIMPLICITY_EXEC" opcode, perhaps), but then,
as new ideas get discovered and widely deployed, we should make them
easy and cheap to use (whether that's deploying a "jet" for the simplicity
code, or a dedicated opcode, or something else). But "cheap to use"
means defining a new cost function (or defining new execution conditions
for something that was already cheaper than the cheapest existing way
of encoding those execution conditions), which is itself a soft fork,
since to make it "cheaper" means being able to fit more transactions
using that feature into a block than was previously possible.
But even then, based on [0], pure simplicity code to verify a signature
apparently takes 11 minutes, so that code probably should cost 66M vbytes
(based on a max time to verify a block of 10 seconds), which would
make it obviously unusable as a bitcoin tx with their normal 100k vbyte
limit... Presumably an initial simplicity deployment would come with a
bunch of jets baked in so that's less of an issue in practice...
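The 66M vbyte figure is just back-of-the-envelope arithmetic; spelled out (assuming ~1M vbytes per block and a ~10 second verification budget per block):

```python
# Back-of-the-envelope for the "66M vbytes" figure: if one pure-Simplicity
# signature verification takes ~11 minutes, and a worst-case block should
# verify in ~10 seconds, then one verification "costs" as much block
# verification time as 66 full blocks of ~1M vbytes each.

verify_seconds = 11 * 60           # ~660s for one pure-Simplicity check
max_block_verify_seconds = 10      # assumed per-block verification budget
block_vbytes = 1_000_000           # max block size in vbytes

cost_vbytes = verify_seconds // max_block_verify_seconds * block_vbytes
assert cost_vbytes == 66_000_000   # vs the ~100k vbyte standardness limit
```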
But I think that means that even with simplicity you couldn't experiment
with alternative ECC curves or zero knowledge stuff without a soft fork
to make the specific setup fast and cheap, first.
[0] https://medium.com/blockstream/simplicity-jets-release-803db10fd589
(I think this approach would already be an improvement in how we do soft
forks, though: (1) for many things, you would already have on-chain
evidence that this is something that's worthwhile, because people are
paying high fees to do it via hand-coded simplicity, so there's no
question of whether it will be used; (2) you can prove the jet and the
simplicity code do the exact same thing (and have unit/fuzz tests to
verify it), so can be more confident that the implementation is correct;
(3) maybe it's easier to describe in a bip that way too, since you can
just reference the simplicity code it's replacing rather than having
C++ code?)
That still probably doesn't cover every experiment you might want to do;
eg if you wanted to have your tx commit to a prior block hash, you'd
presumably need a soft fork to expose that data; and if you wanted to
extend the information about the utxo being spent (eg a parity bit for
the internal public key to make recursive TLUV work better) you'd need a
soft fork for that too.
I guess a purist approach to generalising sighashes might look something
like:
[s] [shimplicity] DUP EXEC p CHECKSIGFROMSTACK
where both s and shimplicity (== sighash + simplicity or shim + simplicity
:) are provided by the signer, with s being a signature, and shimplicity
being a simplicity script that builds a 32 byte message based on whatever
bits of the transaction it chooses as well as the shimplicity script
itself to prevent malleability.
But writing a shimplicity script all the time is annoying, so adding an
extra opcode to avoid that makes sense, reducing it to:
[s] [sh] TXHASH p CHECKSIGFROMSTACK
which is then equivalent to the existing
[s|sh] p CHECKSIG
Though in that case, wouldn't you just have "txhash(sh)" be your
shimplicity script (in which case txhash is a jet rather than an opcode),
and keep the program as "DUP EXEC p CHECKSIGFROMSTACK", which then gives
the signer maximum flexibility to either use a standard sighash, or
write special code to do something new and magic?
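Here's a toy Python model of that "shimplicity" idea (all names here are hypothetical, and the serialization/signing is a stand-in, not real crypto): the signer-supplied program computes the 32-byte message from whatever tx fields it likes, and the message also commits to the program itself, which is what prevents a third party from swapping in a different one:

```python
# Toy model of "DUP EXEC" message construction: the program receives its
# own serialization plus the tx, so the resulting message commits to both.

import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def program_bytes(program) -> bytes:
    # stand-in for the program's canonical serialization
    return program.__name__.encode()

def run_shimplicity(program, tx: dict) -> bytes:
    # "DUP EXEC": pass the program its own bytes, then the transaction
    return program(program_bytes(program), tx)

def sighash_all_style(self_bytes: bytes, tx: dict) -> bytes:
    # commits to everything, including the program itself
    return sha256(self_bytes + sha256(repr(sorted(tx.items())).encode()))

tx = {"inputs": ("txid:0",), "outputs": (("addr", 50_000),)}
m1 = run_shimplicity(sighash_all_style, tx)
tx2 = dict(tx, outputs=(("attacker", 50_000),))
m2 = run_shimplicity(sighash_all_style, tx2)
assert m1 != m2 and len(m1) == 32  # message changes whenever the tx changes
```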
So I think I'm 100% convinced that a (simplified) TXHASH makes sense in
a world where we have simplicity-equivalent scripting (and where there's
*also* some more direct introspection functionality like Rusty's OP_TX
or elements' tapscript opcodes or whatever).
(I don't think there's much advantage of a TaggedHash opcode that
takes the tag as a parameter over just writing "SHA256 DUP CAT SWAP CAT
SHA256", and if you were going to have a "HASH_TapSighash" opcode it
probably should be limited to hashing the same things from the bip that
defines it anyway. So having two simplicity functions, one for bip340
(checksigfromstack) and one for bip342 (generating a signature message
for the current transaction) seems about ideal)
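(For the tagged-hash claim above, a quick Python check that running "SHA256 DUP CAT SWAP CAT SHA256" on a stack of [msg, tag] (tag on top) reproduces the bip340-style tagged hash sha256(sha256(tag) || sha256(tag) || msg):

```python
# Verify that the script sequence "SHA256 DUP CAT SWAP CAT SHA256"
# computes the bip340-style tagged hash, simulating each opcode on a
# simple list-based stack (CAT pops top two, pushes bottom||top).

import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def script_tagged_hash(msg: bytes, tag: bytes) -> bytes:
    stack = [msg, tag]                           # tag on top
    stack.append(sha256(stack.pop()))            # SHA256
    stack.append(stack[-1])                      # DUP
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)                          # CAT
    stack[-1], stack[-2] = stack[-2], stack[-1]  # SWAP
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)                          # CAT
    return sha256(stack.pop())                   # SHA256

def tagged_hash(tag: bytes, msg: bytes) -> bytes:
    t = sha256(tag)
    return sha256(t + t + msg)

assert script_tagged_hash(b"hello", b"BIP0340/challenge") == \
       tagged_hash(b"BIP0340/challenge", b"hello")
```

)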
But, I guess that brings me back to more or less what Jeremy asked
earlier in this thread:
] Does it make "more sense" to invest the research and development effort
] that would go into proving TXHASH safe, for example, into Simplicity
] instead?
Should we be trying to gradually turn script into a more flexible
language, one opcode at a time -- going from 11% to 12% to 13.4% to
14.1% etc of coverage of interesting programs -- or should we invest
that time/effort into working on simplicity (or something like chialisp
or similar) instead? That is, something where we could actually evaluate
how all the improved pieces fit together rather than guessing how it might
work if we maybe in future add CAT or 64 bit maths or something else...
If we put all our "language design" efforts into simplicity/whatever,
we could leave script as more of a "macro" language than a programming
one; that is, focus on it being an easy, cheap, safe way of doing the
most common things. I think that would still be worthwhile, both before
and after simplicity/* is available?
I think my opinions are:
* recursive covenants are not a problem; avoiding them isn't and
shouldn't be a design goal; and trying to prevent other people using
them is wasted effort
* having a language redesign is worthwhile -- there are plenty of ways
to improve script, and there's enough other blockchain languages out
there by now that we ought be able to avoid a "second system effect"
disaster
* CTV via legacy script saves ~17 vbytes compared to going via
tapscript (since the CTV hash is already in the scriptPubKey and the
internal pubkey isn't needed, so neither need to be revealed to spend)
and avoids the taproot ECC equation check, at the cost of using up
an OP_NOP opcode. That seems worthwhile to me. Comparatively, TXHASH
saves ~8 vbytes compared to emulating it with CTV (because you don't
have to supply an unacceptable hash on demand). So having both may be
worthwhile, but if we only have one, CTV seems the bigger saving? And
we're not wasting an opcode if we do CTV now and add TXHASH later,
since TXHASH isn't NOP-compatible and can't be included in legacy
script anyway.
* TXHASH's "PUSH" behaviour vs CTV's "examine the stack but don't
change it, and VERIFY" behaviour is independent of the question of
if we want to supply flags to CTV/TXHASH so they're more flexible
And perhaps less strongly:
* I don't like the ~18 TXHASH flags; for signing/CTV behaviour, they're
both overkill (they have lots of seemingly useless combinations)
and insufficient (don't cover SIGHASH_GROUP), and they add additional
bytes of witness data, compared to CTV's zero-byte default or CHECKSIG's
zero/one-byte sighash which only do things we know are useful (well,
some of those combinations might not be useful either...).
* If we're deliberately trying to add transaction introspection, then
all the flags do make sense, but Rusty's unhashed "TX" approach seems
better than TXHASH for that (assuming we want one opcode versus the
many opcodes elements use). But if we want that, we probably should
also add maths opcodes that can cope with output amounts, at least;
and perhaps also have some way for signatures to commit to some witness
data that's used as script input. Also, convenient introspection isn't
really compatible with convenient signing without some way of
conveniently converting data into a tagged hash.
* I'm not really convinced CTV is ready to start trying to deploy
on mainnet even in the next six months; I'd much rather see some real
third-party experimentation *somewhere* public first, and Jeremy's CTV
signet being completely empty seems like a bad sign to me. Maybe that
means we should tentatively merge the feature and deploy it on the
default global signet though? Not really sure how best to get more
real world testing; but "deploy first, test later" doesn't sit right.
I'm not at all sure about bundling CTV with ANYPREVOUT and SIGHASH_GROUP:
Pros:
- with APO available, you don't have to worry as much if spending
a CTV output doesn't result in a guaranteed txid, and thus don't need
to commit to scriptSigs and the like
- APOAS and CTV are pretty similar in what they hash
- SIGHASH_GROUP lets you add extra change outputs to a CTV spend
which you can't otherwise do
- reusing APOAS's tx hash scheme for CTV would avoid some of the weird
ugly bits in CTV (that the input index is committed to and that the
scriptSig is only "maybe!" included)
- defining SIGHASH_GROUP and CTV simultaneously might let you define
the groups in a way that is compatible between tapscript (annex-based)
and legacy CTV. On the other hand, this probably still works provided
you deploy SIGHASH_GROUP /after/ CTV is specced in (by defining CTV
behaviour for a different length arg)
Cons:
- just APOAS|ALL doesn't quite commit to the same things as bip 119 CTV
and that matters if you reuse CTV addresses
- SIGHASH_GROUP assumes use of the annex, which would need to be
specced out; SIGHASH_GROUP itself doesn't really have a spec yet either
- txs signed by APOAS|GROUP are more malleable than txs with a bip119
CTV hash which might be annoying to handle even non-adversarially
- that malleability with current RBF rules might lead to pinning
problems
I guess for me that adds up to:
* For now, I think I prefer OP_CTV over either OP_TXHASH alone or both
OP_CTV and OP_TXHASH
* I'd like to see CTV get more real-world testing before considering
deployment
* If APO/SIGHASH_GROUP get specced, implemented *and* tested by the
time CTV is tested enough to think about deploying it, bundle them
* Unless CTV testing takes ages, it's pretty unlikely it'll be worth
simplifying CTV to more closely match APO's tx hashing
* CAT, CHECKSIGFROMSTACK, tx introspection, better maths *are* worth
prioritising, but would be better as part of a more thorough language
overhaul (since you can analyse how they interact with each other
in combination, and you get a huge jump from ~10% to ~80% benefit,
instead of tiny incremental ones)?
I guess that's all partly dependent on thinking that, TXHASH isn't
great for tx introspection (especially without CAT) and, (without tx
introspection and decent math opcodes), DLCs already provide all the
interesting oracle behaviour you're really going to get...
> I don't know if this is the answer you are looking for, but technically
> TXHASH + CAT + SHA256 awkwardly gives you limited transaction reflection.
> In fact, you might not even need TXHASH, though it certainly helps.
Yeah, it wasn't really what I was looking for but it does demolish that
specific thought experiment anyway.
> > I believe "sequences hash", "input count" and "input index" are all an
> > important part of ensuring that if you have two UTXOs distributing 0.42
> > BTC to the same set of addresses via CTV, that you can't combine them in a
> > single transaction and end up sending losing one of the UTXOs to fees. I
> > don't believe there's a way to resolve that with bip 118 alone, however
> > that does seem to be a similar problem to the one that SIGHASH_GROUP
> > tries to solve.
> It was my understanding that it is only "input count = 1" that prevents
> this issue.
If you have input count = 1, that solves the issue, but you could also
have input count > 1, and simply commit to different input indexes to
allow/require you to combine two CTV utxos into a common set of new
outputs, or you could have input count > 1 but input index = 1 for both
utxos to prevent combining them with each other, but allow adding a fee
funding input (but not a change output; and at a cost of an unpredictable
txid).
(I only listed "sequences hash" there because it implicitly commits to
"input count")
Cheers,
aj
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-17 14:27 ` Anthony Towns
@ 2022-02-17 14:50 ` Russell O'Connor
0 siblings, 0 replies; 58+ messages in thread
From: Russell O'Connor @ 2022-02-17 14:50 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 405 bytes --]
On Thu, Feb 17, 2022 at 9:27 AM Anthony Towns <aj@erisian.com.au> wrote:
>
> I guess that's all partly dependent on thinking that, TXHASH isn't
> great for tx introspection (especially without CAT) and, (without tx
> introspection and decent math opcodes), DLCs already provide all the
> interesting oracle behaviour you're really going to get...
>
You left out CSFSV's ability to do pubkey delegation.
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-11 18:12 ` digital vagabond
2022-02-12 10:54 ` darosior
@ 2022-02-17 15:15 ` Anthony Towns
1 sibling, 0 replies; 58+ messages in thread
From: Anthony Towns @ 2022-02-17 15:15 UTC (permalink / raw)
To: digital vagabond, Bitcoin Protocol Discussion
On Fri, Feb 11, 2022 at 12:12:28PM -0600, digital vagabond via bitcoin-dev wrote:
> Imagine a covenant design that was
> flexible enough to create an encumbrance like this: a script specifies a
> specific key in a multisig controlled by some authority figure (or a branch
> in the script that would allow unilateral control by such an authority),
> and the conditions of the covenant would perpetually require than any spend
> from the covenant can only be sent to a script involving that key from said
> authority, preventing by consensus any removal of that central authorities
> involvement in control over that UTXO.
> I know that such a walled garden could easily be constructed now with
> multisig and restrictions on where coins can be withdrawn to from exchanges
> or whatever [...], but I think the important distinction
> between such non-consensus system designed to enforce such restrictions and
> a recursive covenant to accomplish the same is that in the case of a
> multisig/non-consensus based system, exit from that restriction is still
> possible under the consensus rules of the protocol.
I think that sort of encumbrance is already possible: you send bitcoin
to an OP_RETURN address and that is registered on some other system as a
way of "minting" coins there (ie, "proof of burn") at which point rules
other than bitcoin's apply. Bitcoin consensus guarantees the value can't
be extracted back out of the OP_RETURN value.
I think spacechains effectively takes up this concept for their one-way
peg:
https://bitcoin.stackexchange.com/questions/100537/what-is-spacechain
https://medium.com/@RubenSomsen/21-million-bitcoins-to-rule-all-sidechains-the-perpetual-one-way-peg-96cb2f8ac302
(I think spacechains requires a covenant construct to track the
single-tx-per-bitcoin-block that commits to the spacechain, but that's
not directly used for the BTC value that was pegged into the spacechain)
If we didn't have OP_RETURN, you could instead pay to a pubkey that's
constructed from a NUMS point / or a pedersen commitment, that's (roughly)
guaranteed unspendable, at least until secp256k1 is broken via bitcoin's
consensus rules (with the obvious disadvantage that nodes then can't
remove these outputs from the utxo set).
That was also used for XCP/Counterparty's ICO in 2014, at about 823 uBTC
per XCP on average (depending on when you got in it was between 666
uBTC/XCP and 1000 uBTC/XCP apparently), falling to a current price of
about 208 uBTC per XCP. It was about 1000 uBTC/XCP until mid 2018 though.
https://counterparty.io/news/why-proof-of-burn/
https://github.com/CounterpartyXCP/Documentation/blob/master/Basics/FAQ-XCP.md
These seem like they might be bad things for people to actually do
(why would you want to be paid to mine a spacechain in coins that can
only fall in value relative to bitcoin?), and certainly I don't think
we should do things just to make this easier; but it seems more like a
"here's why you're hurting yourself if you do this" thing, rather than a
"we can prevent you from doing it and we will" thing.
Cheers,
aj
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-11 0:55 ` [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was " David A. Harding
2022-02-11 3:42 ` Jeremy Rubin
2022-02-11 17:42 ` James O'Beirne
@ 2022-02-18 7:34 ` ZmnSCPxj
2022-02-23 11:28 ` ZmnSCPxj
3 siblings, 0 replies; 58+ messages in thread
From: ZmnSCPxj @ 2022-02-18 7:34 UTC (permalink / raw)
To: David A. Harding, Bitcoin Protocol Discussion
Good morning Dave,
> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev wrote:
>
> > Whether [recursive covenants] is an issue or not precluding this sort
> > of design or not, I defer to others.
>
> For reference, I believe the last time the merits of allowing recursive
> covenants was discussed at length on this list[1], not a single person
> replied to say that they were opposed to the idea.
>
> I would like to suggest that anyone opposed to recursive covenants speak
> for themselves (if any intelligent such people exist). Citing the risk
> of recursive covenants without presenting a credible argument for the
> source of that risk feels to me like (at best) stop energy[2] and (at
> worst) FUD.
Let me try to give that a shot.
(Just to be clear, I am not an artificial intelligence, thus, I am not an "intelligent such people".)
The objection here is that recursion can admit partial (i.e. Turing-complete) computation.
Turing-completeness implies that the halting problem cannot be solved for arbitrary programs in the language.
Now, a counter-argument to that is that rather than using arbitrary programs, we should just construct programs from provably-terminating components.
Thus, even though the language may admit arbitrary programs that cannot provably terminate, "wise" people will just focus on using that subset of the language, and programming styles within the language, which have proofs of termination.
Or in other words: people can just avoid accepting coin that is encumbered with a SCRIPT that is not trivially shown to be non-recursive.
The counter-counter-argument is that it leaves such validation to the user, and we should really create automation (i.e. lower-level non-sentient programs) to perform that validation on behalf of the user.
***OR*** we could just design our language so that such things are outright rejected by the language as a semantic error, of the same type as `for (int x = 0; x = y; x++);` is a semantic error that most modern C compilers will reject if given `-Wall -Werror`.
Yes, we want users to have freedom to shoot themselves in the feet, but we also want, when it is our turn to be the users, to keep walking with two feet as long as we can.
And yes, you could instead build a *separate* tool that checks if your SCRIPT can be proven to be non-recursive, and let the recursive construct remain in the interpreter and just require users who don't want their feet shot to use the separate tool.
That is certainly a valid alternate approach.
It is certainly valid to argue as well, that if a possibly-recursive construct is used, and you cannot find a proof-of-non-recursion, you should avoid coins encumbered with that SCRIPT (which is just a heuristic that approximates a tool for proof-of-non-recursion).
On the other hand, if we have the ability to identify SCRIPTs that have some proof-of-non-recursion, why is such a tool not built into the interpreter itself (in the form of operations that are provably non-recursive), why have a separate tool that people might be too lazy to actually use?
Regards,
ZmnSCPxj
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-11 0:55 ` [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was " David A. Harding
` (2 preceding siblings ...)
2022-02-18 7:34 ` ZmnSCPxj
@ 2022-02-23 11:28 ` ZmnSCPxj
2022-02-23 18:14 ` Paul Sztorc
2022-02-24 6:53 ` Anthony Towns
3 siblings, 2 replies; 58+ messages in thread
From: ZmnSCPxj @ 2022-02-23 11:28 UTC (permalink / raw)
To: David A. Harding, Bitcoin Protocol Discussion
Subject: Turing-Completeness, And Its Enablement Of Drivechains
Introduction
============
Recently, David Harding challenged those opposed to recursive covenants
for *actual*, *concrete* reasons why recursive covenants are a Bad Thing
(TM).
Generally, it is accepted that recursive covenants, together with the
ability to update loop variables, is sufficiently powerful to be
considered Turing-complete.
So, the question is: why is Turing-completeness bad, if it requires
*multiple* transactions in order to implement Turing-completeness?
Surely the practical matter that fees must be paid for each transaction
serves as a backstop against Turing-completeness?
i.e. Fees end up being the "maximum number of steps", which prevents a
language from becoming truly Turing-complete.
I point out here that Drivechains is implementable on a Turing-complete
language.
And we have already rejected Drivechains, for the following reason:
1. Sidechain validators and mainchain miners have a strong incentive to
merge their businesses.
2. Mainchain miners end up validating and committing to sidechain blocks.
3. Ergo, sidechains on Drivechains become a block size increase.
Also:
1. The sidechain-to-mainchain peg degrades the security of sidechain
users from consensus "everyone must agree to the rules" to democracy
"if enough enfranchised voters say so, they can beat you up and steal
your money".
In this write-up, I will demonstrate how recursive covenants, with
loop variable update, is sufficient to implement a form of Drivechains.
Logically, if the construct is general enough to form Drivechains, and
we rejected Drivechains, we should also reject the general construct.
Digression: `OP_TLUV` And `OP_CAT` Implement Recursive Covenants
================================================================
Let me now do some delaying tactics and demonstrate how `OP_TLUV` and
`OP_CAT` allow building recursive covenants by quining.
`OP_TLUV` has a mode where the current Tapleaf is replaced, and the
new address is synthesized.
Then, an output of the transaction is validated to check that it has
the newly-synthesized address.
Let me sketch how a simple recursive covenant can be built.
First, we split the covenant into three parts:
1. A hash.
2. A piece of script which validates that the first witness item
hashes to the above given hash in part #1, and then pushes that
item into the alt stack.
3. A piece of script which takes the item from the alt stack,
hashes it, then concatenates a `OP_PUSH` of the hash to that
item, then does a replace-mode `OP_TLUV`.
Parts 1 and 2 must directly follow each other, but other SCRIPT
logic can be put in between parts 2 and 3.
Part 3 can even occur multiple times, in various `OP_IF` branches.
In order to actually recurse, the top item in the witness stack must
be the covenant script, *minus* the hash.
This is supposed to be the quining argument.
The convenant script part #2 then checks that the quining argument
matches the hash that is hardcoded into the SCRIPT.
This hash is the hash of the *rest* of the SCRIPT.
If the quining argument matches, then it *is* the SCRIPT minus its
hash, and we know that we can use that to recreate the original SCRIPT.
It then pushes them out of the way into the alt stack.
Part #3 then recovers the original SCRIPT from the alt stack, and
resynthesizes the original SCRIPT.
The `OP_TLUV` is then able to resynthesize the original address.
Updating Loop Variables
-----------------------
But repeating the same SCRIPT over and over is boring.
What is much more interesting is to be able to *change* the SCRIPT
on each iteration, such that certain values on the SCRIPT can be
changed.
Suppose our SCRIPT has a loop variable `i` that we want to change
each time we execute our SCRIPT.
We can simply put this loop variable after part 1 and before part 2.
Then part 2 is modified to first push this loop variable onto the
alt stack.
The SCRIPT that gets checked always starts from part 2.
Thus, the SCRIPT, minus the loop variable, is always constant.
The SCRIPT can then access the loop variable from the alt stack.
Part 2 can be extended so that the loop variable is on top of the
quined SCRIPT on the alt stack.
This lets the SCRIPT easily access the loop variable.
The SCRIPT can also update the loop variable by replacing the top
of the alt stack with a different item.
Then part 3 first pops the alt stack top (the loop variable),
concatenates it with an appropriate push, then performs the
hash-then-concatenate dance.
This results in a SCRIPT that is the same as the original SCRIPT,
but with the loop variable possibly changed.
The SCRIPT can use multiple loop variables; it is simply a question
of how hard it would be to access from the alt stack.
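A toy Python model of the construction above (plain data manipulation, not actual Script or `OP_TLUV` semantics): the SCRIPT is laid out as [hash-of-tail][loop variable][tail], the spender supplies the tail as a witness item, part 2 checks it against the hardcoded hash, and part 3 rebuilds the next SCRIPT with an updated loop variable, so the covenant recurses without the SCRIPT ever containing its own hash:

```python
# Quined recursive covenant sketch: the address commits to a hash of the
# constant tail, not to itself, breaking the hash cycle.

import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

TAIL = b"<part2: check+stash> <part3: rebuild+TLUV>"  # constant script tail

def make_script(loop_var: int) -> bytes:
    # part 1 (hash of tail) + loop variable + tail
    return h(TAIL) + loop_var.to_bytes(4, "little") + TAIL

def spend(script: bytes, witness_tail: bytes) -> bytes:
    committed, loop_var = script[:32], int.from_bytes(script[32:36], "little")
    assert h(witness_tail) == committed    # part 2: quine check
    assert witness_tail == script[36:]     # so we recovered the tail exactly
    return make_script(loop_var + 1)       # part 3: next covenant iteration

s0 = make_script(0)
s1 = spend(s0, TAIL)
s2 = spend(s1, TAIL)
assert s1 == make_script(1) and s2 == make_script(2)
```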
Drivechains Over Recursive Covenants
====================================
Drivechains can be split into four parts:
1. A way to commit to the sidechain blocks.
2. A way to move funds from mainchain to sidechain.
3. A way to store sidechain funds.
4. A way to move funds from sidechain to mainchain.
The first three can be easily implemented by a recursive covenant
without a loop variable, together with an opcode to impose some
restriction on amounts, such as `OP_IN_OUT_AMOUNT`.
The technique we would use would be to put the entire sidechain
funds into a single UTXO, protected by a recursive covenant.
The recursive covenant ensures that it can store the sidechain
funds.
This covers part 3.
The recursive covenant could, with the help of `OP_CAT` and
`OP_CTV`, check that every transaction spending the UTXO has a
second output that is an `OP_RETURN` with a commitment to the
sidechain block.
We can ensure that only one such transaction exists in each
mainchain block by adding a `<1> OP_CSV`, ensuring that only one
sidechain-commitment transaction can occur on each mainchain
block.
This covers part 1.
Mainchain-to-sidechain pegs require the cooperation of a
sidechain validator.
The sidechain validator creates a block that instantiates the
peg-in on the sidechain, then creates a transaction that commits
to that sidechain block including the peg-in, and spending the
current sidechain UTXO *and* the mainchain funds being transferred
in.
Then the entity requesting the peg-in checks the sidechain block
and the commitment on the transaction, then signs the transaction.
The value restriction on the recursive covenant should then be to
allow the output to be equal to, or larger than, the input.
This covers part 2.
The recursive sidechain covenant by itself has a constant SCRIPT,
and thus has a constant address.
The last part of Drivechains -- sidechain-to-mainchain peg ---
is significantly more interesting.
Digression: Hashes As Peano Naturals
------------------------------------
It is possible to represent natural numbers using the following
Haskell data type:
```Haskell
data Nat = Z
| S Nat
-- Z :: Nat
-- S :: Nat -> Nat
```
We can represent naturals as:
* `0` == `Z`
* `1` == `S Z`
* `2` == `S (S Z)`
* `3` == `S (S (S Z))`
* etc.
How do we translate this into Bitcoin SCRIPT?
* `Z` == Any arbitrary 160-bit number.
* `S` == `OP_HASH160`.
Thus:
* `0` == `Z`
* `1` == `hash160(Z)`
* `2` == `hash160(hash160(Z))`
* `3` == `hash160(hash160(hash160(Z)))`
* etc.
In particular:
* We can increment a number by simply doing `OP_HASH160`.
* We can decrement a number by having the supposed
decrementation be supplied on the witness stack, then
validating that it is indeed the next lower number by
hashing the witness item and comparing it to the number
we have.
Note also that ***we do not need `OP_ADD` or `OP_SUB` for
this***, though that would actually make it simpler.
(But yeah, the whole point is that *BITCOIN IS A LOT MORE
POWERFUL THAN YOU EXPECT*.)
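The scheme in Python (sha256 is used here as a stand-in for hash160 so the sketch runs anywhere; the construction is identical):

```python
# Hashed-Peano naturals: increment is one hash; a claimed decrement n'
# is validated by re-hashing it and comparing with the current number.

import hashlib

def H(b: bytes) -> bytes:
    # stand-in for OP_HASH160 (sha256 here; same construction either way)
    return hashlib.sha256(b).digest()

Z = b"\x00" * 32              # any arbitrary value works as zero

def increment(n: bytes) -> bytes:
    return H(n)

def check_decrement(n: bytes, claimed_prev: bytes) -> bool:
    # the witness supplies claimed_prev; the covenant just re-hashes it
    return H(claimed_prev) == n

two = increment(increment(Z))
three = increment(two)
assert check_decrement(three, two)
assert not check_decrement(three, Z)
```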
This is relevant to us due to how sidechain-to-mainchain
pegs are implemented.
Drivechain Peg-Out
------------------
In Drivechains, first somebody proposes to withdraw some
amount of funds from the sidechain to a mainchain address.
Then mainchain miners enter a voting period, during
which they either agree to the withdrawal, or disagree.
We can use the above schema to keep track of a running
total number of votes.
We define some numbers:
* `Z` == `0`
* `P` == some maximum time period.
We then encode `Z`, `P / 2`, and `P` using the hashed-Peano
encoding in the previous subsection.
In order to allow withdrawals, we have an alternate branch,
such as a different Tapleaf, for a withdrawal SCRIPT.
This only requires that the first output has the same address
as itself (i.e. the sidechain covenant), and the second output
has a new recursive covenant, the peg-out covenant.
The peg-out covenant has three loop variables:
* `v`, initialized to `Z`.
* This is the "validity level" of the peg-out.
* Voters who want to vote "for validity" would *increment*
this count.
* Voters who want to vote "against validity" would
*do nothing*.
* `t`, initialized to `Z`.
* This is the voting time period.
* Each time the peg-out covenant is used, this loop
variable is incremented.
* Once it reaches `P`, voting ends and the voting
branches of the peg-out covenant are disabled.
* `a`, initialized to the peg-out address.
* This is not actually changed in the covenant, but
it is useful to keep it in the loop variable storage
area.
* With `OP_CTV` this can be an address that commits to
any number of promised outputs.
The peg-out covenant has these branches:
* If `v` equals `P / 2`, then the UTXO can be spent to the
address `a`.
This is synthesized with an `OP_CTV` and `OP_CAT`.
* If `t` equals `P`, then the UTXO can only be spent
by being pegged into the sidechain covenant.
If this branch is not entered, we increment `t`.
* This implies an inter-recursion between the sidechain
covenant and the peg-out covenant.
* Check if the witness stack top is true or not:
* If true, increment `v` and recurse ("vote-for" branch).
* Else just recurse ("vote-against" branch).
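The voting branches above amount to a small state machine; here is a toy Python version (plain integer counters rather than hashed-Peano values, and `P` is an assumed parameter):

```python
# Peg-out covenant branches as a state machine over loop variables (v, t):
# each mainchain block either upvotes (increments v) or abstains; t always
# increments; payout unlocks at v == P//2 and the vote expires at t == P.

P = 10  # assumed maximum voting period

def step(v: int, t: int, upvote: bool):
    if v == P // 2:
        return "PAY_OUT"                   # spendable to address `a`
    if t == P:
        return "EXPIRED"                   # funds return to the sidechain covenant
    return (v + 1 if upvote else v, t + 1)

# miners upvote every block: the payout threshold is reached
state = (0, 0)
while isinstance(state, tuple):
    state = step(*state, upvote=True)
assert state == "PAY_OUT"

# miners never upvote: the vote times out instead
state = (0, 0)
while isinstance(state, tuple):
    state = step(*state, upvote=False)
assert state == "EXPIRED"
```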
### Fixing Inter-recursion
We can observe that the inter-recursion between the sidechain
covenant and the peg-out covenant is problematic:
* `OP_CTV` requires that the hash of the output covenant is
known.
* `OP_TLUV` will only replace the same output index as the
input index it is on.
This prevents the inter-recursion between the sidechain
covenant and the peg-out covenant.
To fix this, we can observe that we can translate any set
of inter-recursive functions, such as this:
```Haskell
foo :: FooArg -> Result
foo fa = bar (fooCode fa)
bar :: BarArg -> Result
bar ba = foo (barCode ba)
```
...into a single self-recursive function:
```Haskell
fooBar :: Either FooArg BarArg -> Result
fooBar a = case a of
  Left fa  -> fooBar (Right (fooCode fa))
  Right ba -> fooBar (Left (barCode ba))
```
Similarly, we can instead convert the inter-recursive
sidechain and peg-out covenants into a single
self-recursive covenant.
This single covenant would have the same set of loop
variables `v`, `t`, and `a` as the peg-out covenant
described above.
This time, `a` is not an address, but an entire output
(i.e. `scriptPubKey` and `amount`).
By default, `v`, `t`, and `a` are all set to the number `0`.
If so, then there is no pending peg-out being voted on.
If there is no pending peg-out, then either we just
commit to a sidechain block, or we commit to a sidechain
block *and* start a new peg-out by filling in `a`, and
initializing `v` and `t` to `Z`.
If there is a pending peg-out, then either we just commit
to a sidechain block (and implicitly downvote the pending
peg-out) or commit to a sidechain block *and* indicate an
upvote of the pending peg-out.
If `v` has reached the limit, then we require, using
`OP_CTV`, that `a` appears on the second output, that
the same SCRIPT (with `v`, `t`, and `a` reset to `0`)
is on the first output with no minimum value imposed
on it, that the sidechain commitment is now an
`OP_RETURN` on the third output, and that there are
no other outputs.
If `t` has reached the limit, then we simply require
that `v`, `t`, and `a` are reset to `0` and that the
sidechain commitment is present.
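As another hypothetical sketch (again with integers for the
counters, and with the per-spend sidechain block commitment
elided), the unified covenant behaves like this transition
function:
```Haskell
-- Hypothetical model of the single self-recursive covenant.
-- `Nothing` models v = t = a = 0, i.e. no pending peg-out;
-- every spend also commits a sidechain block, elided here.
data Pending = Pending { v :: Int, t :: Int, a :: String }
  deriving (Eq, Show)

data Action
  = Commit             -- commit a sidechain block only
  | StartPegOut String -- ...and start voting on this peg-out output
  | Upvote             -- ...and upvote the pending peg-out

-- One spend: the new covenant state, plus a forced payout
-- output (the second output under OP_CTV), if any.
step :: Int -> Int -> Action -> Maybe Pending -> (Maybe Pending, Maybe String)
step z p act st = case st of
  Nothing -> case act of
    Commit        -> (Nothing, Nothing)
    StartPegOut o -> (Just (Pending z z o), Nothing)
    Upvote        -> error "no pending peg-out to upvote"
  Just pd
    | v pd == p `div` 2 -> (Nothing, Just (a pd)) -- pay out `a`, reset
    | t pd == p         -> (Nothing, Nothing)     -- voting over, reset
    | Upvote <- act     -> (Just pd { v = v pd + 1, t = t pd + 1 }, Nothing)
    | Commit <- act     -> (Just pd { t = t pd + 1 }, Nothing) -- implicit downvote
    | otherwise         -> error "cannot start a peg-out while one is pending"
```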
With the above, all components of Drivechain are implementable
with:
* `OP_TLUV`
* `OP_CAT`
* `OP_CTV`
* `OP_IN_OUT_AMOUNT` of some kind, including the ability to
check the output amount is larger than the input amount
(e.g. by `OP_EQUAL` or `OP_GREATER`).
* Existing Bitcoin SCRIPT (`OP_ADD` **not** needed!).
Conclusion
==========
PH34R THE RECURSIVE COVENANT!
PH34R!!!!!!!
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-23 11:28 ` ZmnSCPxj
@ 2022-02-23 18:14 ` Paul Sztorc
2022-02-24 2:20 ` ZmnSCPxj
2022-02-24 6:53 ` Anthony Towns
1 sibling, 1 reply; 58+ messages in thread
From: Paul Sztorc @ 2022-02-23 18:14 UTC (permalink / raw)
To: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 6321 bytes --]
On 2/23/2022 6:28 AM, ZmnSCPxj via bitcoin-dev wrote:
> ... Drivechains is implementable on a Turing-complete
> language.
> And we have already rejected Drivechains, for the following reason:
>
> 1. Sidechain validators and mainchain miners have a strong incentive to
> merge their businesses.
> 2. Mainchain miners end up validating and committing to sidechain blocks.
> 3. Ergo, sidechains on Drivechains become a block size increase.
Is this indeed the reason? Because it is not a good one.
First, (as always) we must ignore BIP 301*. (Since it was invented to cancel point 1 -- which it does, by giving an incentive for side-validators and main-miners to UN-merge their businesses.)
With that out of the way, let's swap "blocksize increase" for "mining via natural gas flaring":
1. Oil drillers and mainchain miners have a strong incentive** to merge their businesses.
2. Mainchain miners end up drilling for oil.
3. Ergo, sidechains on Drivechains become a requirement that full nodes mine for oil.
The above logic is flawed, because full nodes can ignore the mining process. Nodes outrank miners.
Merged mining is, in principle, no different from any other source of mining profitability. I believe there is an irrational prejudice against merged mining, because MM takes the form of software. It would be like an NFL referee who refuses to allow their child to play an NFL videogame, on the grounds that the reffing in the game is different from how the parent would ref. But that makes no difference to anything. The only relevant issue is if the child has fun playing the videogame.
(And of course, merged mining long predates drivechain, and miners are MMing now, and have been for years. It was Satoshi who co-invented merged mining, so the modern prejudice against it is all the more mysterious.)
> Also:
>
> 1. The sidechain-to-mainchain peg degrades the security of sidechain
> users from consensus "everyone must agree to the rules" to democracy
> "if enough enfranchised voters say so, they can beat you up and steal
> your money".
>
> In this write-up, I will...
This is also a mischaracterization.
Drivechain will not work if 51% hashrate is attacking the network. But that is the case for everything, including the Lightning Network***.
So there is no sense in which the security is "degraded". To establish that, one would need arguments about what will probably happen and why. Which is exactly what my original Nov 2015 article contains: truthcoin.info/blog/drivechain/#drivechains-security, as does my Peer Review section: https://www.drivechain.info/peer-review/peer-review-new/
(And, today Largeblocker-types do not have any "everyone must agree to the rules" consensus, at all. Anyone who wants to use a sidechain-feature today, must obtain it via Altcoin or via real-world trust. So the current security is "nothing" and so it is hard to see how that could be "degraded".)
--
I am not sure it is a good use of my time to talk to this list about Drivechain. My Nov 2015 article anticipated all of the relevant misunderstandings. Almost nothing has changed since then.
As far as I am concerned, Drivechain was simply ahead of its time. Eventually, one or more of the following --the problem of Altcoins, the human desire for freedom and creativity, the meta-consensus/upgrade/ossification problem, the problem of persistently low security budget, and/or the expressiveness of Bitcoin smart contracts-- will force Bitcoiners to relearn drivechain-lore and eventually adopt something drivechain-like. At which point I will write to historians to demand credit. That is my plan so far, at least.
--
As to the actual content of your post, it seems pro-Drivechain.
After all, you are saying that Recursive Covenants --> Turing Completeness --> Drivechain. So, which would you rather have? The hacky, bizarro, covenant-Drivechain, or my pure optimized transparent Bip300-Drivechain? Seems that this is exactly what I predicted: people eventually reinventing Drivechain.
On this topic, in 2015-2016 I wrote a few papers and gave a few recorded talks****, in which I compared the uncontrollable destructive chaos of Turing Completeness, to a "categorical" Turing Completeness where contracts are sorted by category (ie, all of the BitName contracts in the Namecoin-sidechain, all of the oracle contracts in the oracle sidechain, etc). The categorical strategy allows, paradoxically (and perhaps counterintuitively), for more expressive contracts, since you can prevent smart contracts from attacking each other. (They must have a category, so if they aren't Name-contracts they cannot live in the Namecoin-sidechain -- they ultimately must live in an "Evil Sidechain", which the miners have motive and opportunity to simply disable.) If people are now talking about how Turing Completeness can lead to smart contracts attacking each other, then I suppose I was years ahead-of-my-time with that, as well. Incidentally, my conclusion was that this problem is BEST solved by allowing miners to censor contract-categories (aka censor sidechain-categories, aka 'beat people up' as you put it), which is how I invented drivechain in the first place.
*Shrug*,
Paul
*A small table which explains how this works: https://github.com/bitcoin/bips/blob/master/bip-0301.mediawiki#notation-and-example
**Doubtless many of you have heard of this new trend: oil drillers encounter unwanted natural gas, in areas where there are no natural gas customers. Instead of wasting this gas, they have begun selling it to miners: https://economictimes.indiatimes.com/news/international/business/oil-drillers-and-bitcoin-miners-bond-over-natural-gas/articleshow/82828878.cms
***As is well known, it is easy for 51% hashrate to double-spend in the LN, by censoring 'justice transactions'. Moreover, miners seem likely to evade retribution if they do this, as they can restrain the scale, timing, victims, circumstances etc of the attack.
****https://www.youtube.com/watch?v=xGu0o8HH10U&list=PLw8-6ARlyVciMH79ZyLOpImsMug3LgNc4&index=1
https://www.truthcoin.info/blog/contracts-oracles-sidechains/
https://www.truthcoin.info/blog/drivechain-op-code/
https://www.truthcoin.info/blog/wise-contracts/
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-23 18:14 ` Paul Sztorc
@ 2022-02-24 2:20 ` ZmnSCPxj
0 siblings, 0 replies; 58+ messages in thread
From: ZmnSCPxj @ 2022-02-24 2:20 UTC (permalink / raw)
To: Paul Sztorc, Bitcoin Protocol Discussion
Good morning Paul, welcome back, and the list,
For the most part I am reluctant to add Turing-completeness due to the Principle of Least Power.
We saw this play out on the web browser technology.
A full Turing-complete language was included fairly early in a popular HTML implementation, which everyone else then copied.
In the beginning, it had very loose boundaries, and protections against things like cross-site scripting did not exist.
Eventually, W3C cracked down and modern JavaScript is now a lot more sandboxed than at the beginning --- restricting its power.
In addition, things like "change the color of this bit when the mouse hovers over it", which used to be implemented in JavaScript, were moved to CSS, a non-Turing-complete language.
The Principle of Least Power is that we should strive to use the language with *only what we need*, and naught else.
So I think for the most part that Turing-completeness is dangerous.
There may be things, other than Drivechain, that you might object to enabling in Bitcoin, and if those things can be implemented in a Turing-complete language, then they are likely implementable in recursive covenants.
That the web *started* with a powerful language that was later restricted is fine for the web.
After all, the main use of the web is showing videos of attractive female humans, and cute cats.
(WARNING: WHEN I TAKE OVER THE WORLD, I WILL TILE IT WITH CUTE CAT PICTURES.)
(Note: I am not an AI that seeks to take over the world.)
But Bitcoin protects money, which I think is more important, as it can be traded not only for videos of attractive female humans, and cute cats, but other, lesser things as well.
So I believe some reticence towards recursive covenants, and other things they may enable, is warranted: the Principle of Least Power exists, though admittedly, this principle was developed for the web.
The web is a server-client protocol, but Bitcoin is peer-to-peer, so it certainly seems possible that the Principle of Least Power does not apply to Bitcoin.
As I understand it, however, the Principle of Least Power exists *precisely* because increased power often lets third parties do more than what was expected, including things that might damage the interests of the people who allowed the increased power to exist, or things that might damage the interests of *everyone*.
One can point out as well, that despite the problems that JavaScript introduced, it also introduced GMail and the now-rich Web ecosystem.
Perhaps one might liken recursive covenants to the box that was opened by Pandora.
Once opened, what is released cannot be put back.
Yet perhaps at the bottom of this box, is Hope?
Also: Go not to the elves for counsel, for they will say both no and yes.
Regards,
ZmnSCPxj
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-23 11:28 ` ZmnSCPxj
2022-02-23 18:14 ` Paul Sztorc
@ 2022-02-24 6:53 ` Anthony Towns
2022-02-24 12:03 ` ZmnSCPxj
1 sibling, 1 reply; 58+ messages in thread
From: Anthony Towns @ 2022-02-24 6:53 UTC (permalink / raw)
To: ZmnSCPxj, Bitcoin Protocol Discussion
On Wed, Feb 23, 2022 at 11:28:36AM +0000, ZmnSCPxj via bitcoin-dev wrote:
> Subject: Turing-Completeness, And Its Enablement Of Drivechains
> And we have already rejected Drivechains,
That seems overly strong to me.
> for the following reason:
> 1. Sidechain validators and mainchain miners have a strong incentive to
> merge their businesses.
> 2. Mainchain miners end up validating and committing to sidechain blocks.
> 3. Ergo, sidechains on Drivechains become a block size increase.
I think there are two possible claims about drivechains that would make
them unattractive, if true:
1) that adding a drivechain is a "block size increase" in the sense
that every full node and every miner need to do more work when
validating a block, in order to be sure whether the majority of hash
rate will consider it valid, or will reject it and refuse to build
on it because it's invalid because of some external drivechain rule
2) that funds deposited in drivechains will be stolen because
the majority of hashrate is not enforcing drivechain rules (or that
deposited funds cannot be withdrawn, but will instead be stuck in
the drivechain, rather than having a legitimate two-way peg)
And you could combine those claims, saying that one or the other will
happen (depending on whether more or less than 50% of hashpower is
enforcing drivechain rules), and either is bad, even though you don't
know which will happen.
I believe drivechain advocates argue a third outcome is possible where
neither of those claims holds true, where only a minority of hashrate
needs to validate the drivechain rules, but that is still sufficient
to prevent drivechain funds from being stolen.
One way to "reject" drivechains is simply to embrace the second claim --
that putting money into drivechains isn't safe, and that miners *should*
claim coins that have been drivechain-encumbered (or that miners
should not assist with withdrawing funds, leaving them trapped in the
drivechain). In some sense this is already the case: bip300 rules aren't
enforced, so funds committed today via bip300 can likely expect to be
stolen, and likely won't receive the correct acks, so won't progress
even if they aren't stolen.
I think a key difference between tx-covenant based drivechains and bip300
drivechains is hashpower endorsement: if 50% of hashpower acks enforcement
of a new drivechain (as required in bip300 for a new drivechain to exist
at all), there's an implicit threat that any block proposing an incorrect
withdrawal from that blockchain will have their block considered invalid
and get reorged out -- either directly by that hashpower majority, or
indirectly by users conducting a UASF forcing the hashpower majority to
reject those blocks.
I think removing that implicit threat changes the game theory
substantially: rather than deposited funds being withdrawn due to the
drivechain rules, you'd instead expect them to be withdrawn according to
whoever's willing to offer the miners the most upfront fees to withdraw
the funds.
That seems to me to mean you'd frequently expect to end up in a scorched
earth scenario, where someone attempts to steal, then they and the
legitimate owner gets into a bidding war, with the result that most
of the funds end up going to miners in fees. Because of the upfront
payment vs delayed collection of withdrawn funds, maybe it could end up
as a dollar auction, with the two parties competing to lose the least,
but still both losing substantial amounts?
So I think covenant-based drivechains would be roughly the same as bip300
drivechains, where a majority of hashpower used software implementing
the following rules:
- always endorse any proposed drivechain
- always accept any payment into a drivechain
- accept bids to ack/nack withdrawals, then ack/nack depending on
whoever pays the most
You could probably make covenant-based drivechains a closer match to
bip300 drivechains if a script could determine if an input was from a
(100-block prior) coinbase or not.
> Logically, if the construct is general enough to form Drivechains, and
> we rejected Drivechains, we should also reject the general construct.
Not providing X because it can only be used for E, may generalise to not
providing Y which can also only be used for E, but it doesn't necessarily
generalise to not providing Z which can be used for both G and E.
I think it's pretty reasonable to say:
a) adding dedicated consensus features for drivechains is a bad idea
in the absence of widespread consensus that drivechains are likely
to work as designed and be a benefit to bitcoin overall
b) if you want to risk your own funds by leaving your coins on an
exchange or using lightning or eltoo or tumbling/coinjoin or payment
pools or drivechains or being #reckless in some other way, and aren't
asking for consensus changes, that's your business
Cheers,
aj
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-24 6:53 ` Anthony Towns
@ 2022-02-24 12:03 ` ZmnSCPxj
2022-02-26 5:38 ` Billy Tetrud
2022-02-26 6:00 ` Anthony Towns
0 siblings, 2 replies; 58+ messages in thread
From: ZmnSCPxj @ 2022-02-24 12:03 UTC (permalink / raw)
To: Anthony Towns; +Cc: Bitcoin Protocol Discussion
Good morning aj,
> > Logically, if the construct is general enough to form Drivechains, and
> > we rejected Drivechains, we should also reject the general construct.
>
> Not providing X because it can only be used for E, may generalise to not
> providing Y which can also only be used for E, but it doesn't necessarily
> generalise to not providing Z which can be used for both G and E.
Does this not work only if the original objection to merging in BIP-300 was of the form:
* X implements E.
* Z implements G and E.
* Therefore, we should not merge in X and instead should merge in the more general construct Z.
?
Where:
* E = Drivechains
* X = BIP-300
* Z = some general computation facility
* G = some feature.
But my understanding is that most of the NACKs on the BIP-300 were of the form:
* X implements E.
* E is bad.
* Therefore, we should not merge in X.
If the above statement "E is bad" holds, then:
* Z implements G and E.
* Therefore, we should not merge in Z.
Where Z = something that implements recursive covenants.
I think we really need someone who NACKed BIP-300 to speak up.
If my understanding is correct and that the original objection was "Drivechains are bad for reasons R[0], R[1]...", then:
* You can have either of these two positions:
* R[0], R[1] ... are specious arguments and Drivechains are not bad, therefore we can merge in a feature that enables Recursive Covenants -> Turing-Completeness -> Drivechains.
* Even if you NACKed before, you *are* allowed to change your mind and move to this position.
* R[0], R[1] ... are valid arguments and Drivechains are bad, therefore we should **NOT** merge in a feature that implements Recursive Covenants -> Turing-Completeness -> Drivechains.
You cannot have it both ways.
Admittedly, there may be some set of restrictions that prevent Turing-Completeness from implementing Drivechains, but you have to demonstrate a proof of that set of restrictions existing.
> I think it's pretty reasonable to say:
>
> a) adding dedicated consensus features for drivechains is a bad idea
> in the absence of widespread consensus that drivechains are likely
> to work as designed and be a benefit to bitcoin overall
>
> b) if you want to risk your own funds by leaving your coins on an
> exchange or using lightning or eltoo or tumbling/coinjoin or payment
> pools or drivechains or being #reckless in some other way, and aren't
> asking for consensus changes, that's your business
*Shrug* I do not really see the distinction here --- in a world with Drivechains, you are free to not put your coins in a Drivechain-backed sidechain, too.
(Admittedly, Drivechains does get into a Mutually Assured Destruction argument, so that may not hold.
But if Drivechains going into a MAD argument is an objection, then I do not see why covenant-based Drivechains would also not get into the same MAD argument --- and if you want to avoid the MADness, you cannot support recursive covenants, either.
Remember, 51% attackers can always censor the blockchain, regardless of whether you put the Drivechain commitments into the coinbase, or in an ostensibly-paid-by-somebody-else transaction.)
Regards,
ZmnSCPxj
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-24 12:03 ` ZmnSCPxj
@ 2022-02-26 5:38 ` Billy Tetrud
2022-02-26 6:43 ` ZmnSCPxj
2022-02-26 6:00 ` Anthony Towns
1 sibling, 1 reply; 58+ messages in thread
From: Billy Tetrud @ 2022-02-26 5:38 UTC (permalink / raw)
To: ZmnSCPxj, Bitcoin Protocol Discussion; +Cc: Anthony Towns
@ZmnSCPxj
> we have already rejected Drivechains
I also think this is kind of dubious. I don't remember consensus being to
"reject" drivechains, as much as consensus was that it wasn't a priority
and there wasn't a lot of interest in working on it from many people (I'm
sure Paul could comment further on that).
> sidechains on Drivechains become a block size increase.
While this would be true for those who opt into a particular drivechain, I
think it's important to note that it would *not* be identical to a
main-chain block size increase in a very important way: normal bitcoin
miners and nodes that don't care about drivechains would not see a
blocksize increase.
But even in the hypothetical scenario where *all* mainchain miners expand
their business to sidechains, it still does not negatively affect normal
bitcoin nodes that don't care about drivechains. The important things
<https://github.com/fresheneesz/bitcoinThroughputAnalysis> about a "normal"
blocksize increase are:
A. It increases the machine resources necessary for IBD, transaction relay,
and validation
B. It probably increases the growth rate of the UTXO set, increasing memory
necessary to store that.
C. It increases the cost of storing the blockchain on non-pruned nodes
D. It increases the average propagation time of new blocks, which increases
miner centralization pressure.
The existence of drivechains with every miner opted into (some of) them
would only negatively impact item D. Normal bitcoin nodes wouldn't need to
use any extra resources if they don't care about drivechains. And miners
would only have additional centralization pressure proportional to what
drivechains they're opted into. The reason for that is that if a miner is
opted into drivechain X, and propagation of transaction data for
drivechain X is significantly slower than the normal bitcoin network, a
miner may not have the latest drivechain X block to merge mine on top of.
However that miner can still mine bitcoin with no additional latency, and
so that centralization pressure is minimal unless a significant fraction of
the miner's revenue comes from drivechains with slow data propagation.
Beyond that, by my calculations, miner centralization is quite far from
being significantly affected by blocksize increases. So unless drivechains
become the dominant use case of the bitcoin blockchain, this really isn't
something that I expect to cause any substantial miner centralization or
other blocksize related problems.
ZmnSCPxj, are you arguing that drivechains are bad for bitcoin or are you
arguing that it would be unwise to opt into a drivechain? Those are very
different arguments. If drivechains compromised things for normal bitcoin
nodes that ignore drivechains, then I agree that would be serious reason to
reject drivechains outright and reject things that allow it to happen.
However, if all you're saying is that people can shoot themselves in the
foot with drivechains, then avoiding drivechains should not be a
significant design consideration for bitcoin but rather for those who might
consider spending their time working on drivechains.
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-24 12:03 ` ZmnSCPxj
2022-02-26 5:38 ` Billy Tetrud
@ 2022-02-26 6:00 ` Anthony Towns
1 sibling, 0 replies; 58+ messages in thread
From: Anthony Towns @ 2022-02-26 6:00 UTC (permalink / raw)
To: ZmnSCPxj, Bitcoin Protocol Discussion
On Thu, Feb 24, 2022 at 12:03:32PM +0000, ZmnSCPxj via bitcoin-dev wrote:
> > > Logically, if the construct is general enough to form Drivechains, and
> > > we rejected Drivechains, we should also reject the general construct.
> > Not providing X because it can only be used for E, may generalise to not
> > providing Y which can also only be used for E, but it doesn't necessarily
> > generalise to not providing Z which can be used for both G and E.
> Does this not work only if the original objection to merging in BIP-300 was of the form:
> * X implements E.
> * Z implements G and E.
> * Therefore, we should not merge in X and instead should merge in the more general construct Z.
I'd describe the "original objection" more as "E is not worth doing;
X achieves nothing but E; therefore we should not work on or merge X".
Whether we should work on or eventually merge some other construct that
does other things than E, depends on the (relative) merits of those
other things.
> I think we really need someone who NACKed BIP-300 to speak up.
Here's some posts from 2017:
] I think it's great that people want to experiment with things like
] drivechains/sidechains and what not, but their security model is very
] distinct from Bitcoin's and, given the current highly centralized
] mining ecosystem, arguably not very good. So positioning them as a
] major solution for the Bitcoin project is the wrong way to go. Instead
] we should support people trying cool stuff, at their own risk.
- https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-July/014726.html
] Regardless, people are free experiment and adopt such an approach. The
] nice thing about it not being a hardfork is that it does not require
] network-wide consensus to deploy. However, I don't think they offer a
] security model that should be encouraged, and thus doesn't have a
] place on a roadmap.
- https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-July/014729.html
> If my understanding is correct and that the original objection was "Drivechains are bad for reasons R[0], R[1]...", then:
> * You can have either of these two positions:
> * R[0], R[1] ... are specious arguments and Drivechains are not bad [...]
> * R[0], R[1] ... are valid arguments and Drivechains are bad, therefore we should **NOT** merge in a feature that implements Recursive Covenants [...]
> You cannot have it both ways.
I guess you mean to say that I've got to pick one, rather than can't
pick both. But in any event, I don't pick either; my view is more along
the lines of:
* drivechains shouldn't be used
* it's okay if other people think drivechains are worth using, and go
ahead and do so, if they're not creating a direct burden on everyone
else
That's the same position I hold for other things, like using lightning
on mainnet in January 2018; or giving your bitcoin to an anonymous
custodian so it can be borrowed via a flash loan on some novel third
party smart contract platform.
> Admittedly, there may be some set of restrictions that prevent Turing-Completeness from implementing Drivechains, but you have to demonstrate a proof of that set of restrictions existing.
Like I said; I don't think the drivechains game theory works without
the implicit threat of miner censorship, and therefore you need a
"from_coinbase" flag as well as covenants. That's not a big objection,
though. (On the other hand, if I'm wrong and drivechains *do* work
without that threat; then drivechains don't cause a block size increase,
and can be safely ignored by miners and full node operators, and the
arguments against drivechains are specious; and implementing them purely
via covenants so miners aren't put in a privileged position seems an
improvement)
> > I think it's pretty reasonable to say:
> >
> > a) adding dedicated consensus features for drivechains is a bad idea
> > in the absence of widespread consensus that drivechains are likely
> > to work as designed and be a benefit to bitcoin overall
> >
> > b) if you want to risk your own funds by leaving your coins on an
> > exchange or using lightning or eltoo or tumbling/coinjoin or payment
> > pools or drivechains or being #reckless in some other way, and aren't
> > asking for consensus changes, that's your business
>
> *Shrug* I do not really see the distinction here --- in a world with Drivechains, you are free to not put your coins in a Drivechain-backed sidechain, too.
Well, yes: I'm saying there's no distinction between putting funds in
drivechains and other #reckless things you might do with your money?
My opinion is (a) we should be conservative about adding new consensus
features because of the maintenance cost; (b) we should design
consensus/policy in a way to encourage minimising the externality costs
users impose on each other; and (c) we should make it as easy as possible
to use bitcoin safely in general -- but if people *want* to be reckless,
even knowing the consequences, that's fine.
> (Admittedly, Drivechains does get into a Mutually Assured Destruction argument, so that may not hold.
> But if Drivechains going into a MAD argument is an objection, then I do not see why covenant-based Drivechains would also not get into the same MAD argument --- and if you want to avoid the MADness, you cannot support recursive covenants, either.
I think the argument you believe, but aren't quite actually making,
is along the lines of:
a) drivechain technology doesn't just potentially harm people who use
it; it is an existential threat to bitcoin if used by anyone
b) therefore the ability for anyone to implement them must be prevented
c) (a) is well known and widely agreed upon by all reasonable
well-informed people
(b) is definitely a reasonable consequence of (a), but I don't agree
with (a). Drivechains have certainly been criticised as a bad idea,
but there are plenty of bad ideas that don't need to be outlawed.
But I think the simplest *method* of preventing drivechains from having
significant adoption is just "users encourage miners to steal funds
deposited into drivechains" (eg, by declining to do a UASF to prevent
such theft), which then obviously discourages people from putting funds
into drivechains. Since that can still be done even if bip300 or an
implementation of drivechains-via-covenants is deployed, I don't think
drivechains are an existential threat to bitcoin.
> Remember, 51% attackers can always censor the blockchain, regardless of whether you put the Drivechain commitments into the coinbase, or in an ostensibly-paid-by-somebody-else transaction.)
I think you could make the analogy between drivechains and covenants a
fair bit stronger in the following way:
The idea behind drivechains and the liquid sidechain is, in both cases,
that funds can be moved to some other blockchain with its own rules, and
then moved back to the bitcoin blockchain, via the assistance of some
group that will hopefully follow the stated rules of the sidechain. In
liquid's case it's a group of semi-known functionaries who are also
directly responsible for transactions appearing on the liquid sidechain,
and it's done by them signing via multisig. For bip300, it's bitcoin
miners, and done by putting entries in the coinbase.
But because letting any miner alone immediately move funds would
obviously be too risky to consider, bip300 adds additional
restrictions: multisig-like aspects, delays, and the ability
to back-out/correct a theft attempt before it's final, which provides
the opportunity for honest participants to react to miners attempting to
cheat and hopefully achieve a legitimate outcome instead. Whether that's
enough is still debatable -- but it's certainly an improvement to go from
"too risky to consider" to "debatable".
But the same incentive can apply to liquid too: it might be good to be
able to have liquid funds encumbered on the bitcoin blockchain in such a
way that it's even harder for people with liquid's private keys to cheat
than it currently is -- ie, it would be good to be able to specify more
"vault-like" behaviours for the liquid funds, perhaps in relation to the
"backup recovery keys" [0], eg.
As a result, while it's not obvious, I think it shouldn't be *surprising*
that the same technology that allows "vaults" also enables (something
like) drivechains -- since the goal in both cases is just constraining
how withdrawals work.
Cheers,
aj
[0] https://medium.com/blockstream/patching-the-liquid-timelock-issue-b4b2f5f9a973
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-26 5:38 ` Billy Tetrud
@ 2022-02-26 6:43 ` ZmnSCPxj
2022-02-27 0:58 ` Paul Sztorc
0 siblings, 1 reply; 58+ messages in thread
From: ZmnSCPxj @ 2022-02-26 6:43 UTC (permalink / raw)
To: Billy Tetrud; +Cc: Bitcoin Protocol Discussion, Anthony Towns
Good morning AJ,
> ZmnSCPxj, are you arguing that drivechains are bad for bitcoin or are you arguing that it would be unwise to opt into a drivechain? Those are very different arguments. If drivechains compromised things for normal bitcoin nodes that ignore drivechains, then I agree that would be serious reason to reject drivechains outright and reject things that allow it to happen. However, if all you're saying is that people can shoot themselves in the foot with drivechains, then avoiding drivechains should not be a significant design consideration for bitcoin but rather for those who might consider spending their time working on drivechains.
Neither.
My argument is simply:
* If Drivechains are bad for whatever reason, we should not add recursive covenants.
* Otherwise, go ahead and add recursive covenants.
Drivechains are not a scaling solution [FOOTNOTE 1], and I personally am interested only in scaling solutions; adding more non-scaling-useable functionality is not of interest to me and I do not really care (but I would *prefer* if people focus on scaling-useable functionality, like `SIGHASH_NOINPUT`, `OP_EVICT`, `OP_CTV`, `OP_TLUV`, probably without the self-replace capability).
I bring this up simply because I remembered those arguments against Drivechains, and as far as I could remember, those were the reasons for not adding Drivechains.
But if there is consensus that those arguments are bogus, then go ahead --- add Drivechains and/or recursive covenants.
I do not intend to utilize them any time soon anyway.
My second position is that in general I am wary of adding Turing-completeness, due precisely to the Principle of Least Power.
A concern is that, since it turns out recursive covenants are sufficient to implement Drivechains, recursive covenants may also enable *other* techniques, currently unknown, which may have negative effects on Bitcoin, or which would be considered undesirable by a significant section of the userbase.
Of course, I know of no such technique, but given that a technique (Drivechains) which before would have required its own consensus change, turns out to be implementable inside recursive covenants, then I wonder if there are other things that would have required their own consensus change that are now *also* implementable purely in recursive covenants.
Of course, that is largely just stop energy, so if there is *now* consensus that Drivechains are not bad, go ahead, add recursive covenants (but please can we add `SIGHASH_NOINPUT` and `OP_CTV` first?).
Regards,
ZmnSCPxj
[FOOTNOTE 1] Sidechains are not a scaling solution, or at least, are beaten in raw scaling by Lightning. Blockchains are inefficient (THAT IS PRECISELY THE PROBLEM WHY YOU NEED A SCALING SOLUTION FOR BITCOIN THAT WAS LIKE THE FIRST RESPONSE TO SATOSHI ON THE CYPHERPUNK MAILING LIST) and you have to show your transaction to everyone. While sidechains imply that particular subsets are the only ones interested in particular transactions, compare how large a sidechain-participant-set would be expected to be, to how many people learn of a payment over the Lightning Network. If you want a sidechain to be as popular as LN, then you expect its participant set to be about as large as LN as well, and on a sidechain, a transaction is published to all sidechain participants, but on the LN, only a tiny tiny tiny fraction of the network is involved in any payment. Thus LN is a superior scaling solution. Now you might counter-argue that you can have multiple smaller sidechains and just use HTLCs to trade across them (i.e. microchains). I would then counter-counter-argue that bringing this to the most extreme conclusion, you would have tons of sidechains with only 2 participants each, and then you would pay by transferring across multiple participants in a chain of HTLCs and look, oh wow, surprise surprise, you just got the Lightning Network. LN wins.
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-26 6:43 ` ZmnSCPxj
@ 2022-02-27 0:58 ` Paul Sztorc
2022-02-27 2:00 ` ZmnSCPxj
0 siblings, 1 reply; 58+ messages in thread
From: Paul Sztorc @ 2022-02-27 0:58 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 5511 bytes --]
On 2/26/2022 1:43 AM, ZmnSCPxj via bitcoin-dev wrote:
> ...
> Drivechains are not a scaling solution [FOOTNOTE 1] ...
> I personally am interested only in scaling solutions, adding more non-scaling-useable functionality is not of interest to me and I do not really care
> ...
> But if there is consensus that those arguments are bogus, then go ahead --- add Drivechains and/or recursive covenants.
> ...
>
> [FOOTNOTE 1] Sidechains are not a scaling solution ... Blockchains are inefficient ... and you have to show your transaction to everyone.
> ...
> Now you might conter-argue that you can have multiple smaller sidechains and just use HTLCs to trade across them ... I would then counter-counter-argue that bringing this to the most extreme conclusion, you would have tons of sidechains with only 2 participants each ...
Do you really hang your entire --"sidechains are not a scaling solution"-- argument on this frail logic?
The scaling strategy (in LN and DC) is the same: try NOT to "show your transaction to everyone". The details are of course different.
I think largeblock sidechains should be reconsidered:
* They are not a blocksize increase.
* They endorse the principle of scaling in layers.
* They allow users to be different. Some can pay more (for more decentralization), some less (for less decentralization).
(We are currently gambling the entire future of BTC, on the premise that strong decentralization will always be needed at all points in time.)
(This leaves us vulnerable to a strategy where our adversaries temporarily favor/promote centralized chains, so as to "domesticate" / control these in the future.)
* We can learn from past mistakes -- when a new largeblock sidechain is needed, we can make a new one from scratch, using everything we know.
* Different teams can compete, releasing different chains independently; thus curtailing "toxicity".
* All of the fees, paid on all blockchains, arrive as revenue to the same group of miners, thus improving total hashrate/difficulty.
* Sidechains will organize geographically, which may help security (ie, USA could spitefully run full nodes of the "China" largeblock sidechain).
* Relative to LN, users enjoy: unlimited "inbound liquidity", can receive money while offline, no risk that the channel will close, etc.
Certainly, sidechains are NOT for everyone. (Just as [I imagine] the LN is not for everyone.)
However, in 2015, many hardfork-largeblockers said: "we do not run a full node, full nodes are not important; we use SPV; read the whitepaper" etc.
They used SPV completely; and wanted large blocks. Presumably they would be happy users of a largeblock sidechain. So it would be >0 users.
Sadly, this idea is neglected, (I think) because of its unfortunate resemblance to naive-largeblock-ism. This is irrational.
***
You have emphasized the following relation: "you have to show your transaction to everyone" = "thing doesn't scale".
However, in LN, there is one transaction which you must, in fact, "show to everyone": your channel-opener.
Amusingly, in the largeblock sidechain, there is not. You can onboard using only the blockspace of the SC.
(One "rich guy" can first shift 100k coins Main-to-Side, and he can henceforth onboard many users over there. Those users can then onboard new users, forever.)
So it would seem to me, that you are on the ropes, even by your own criterion. [Footnote 1]
***
Perhaps, someone will invent a way, to LN-onboard WITHOUT needing new layer1 bytes.
If so, a "rich man" could open a LN channel, and gradually transfer it to new people.
Such a technique would need to meet two requirements (or, so it seems to me):
#1: The layer1 UTXO (that defines the channel) can never change (ie, the 32-bytes which define the p2sh/tapscript/covenant/whatever, must stay what-they-were when the channel was opened).
#2: The new part-owners (who are getting coins from the rich man), will have new pubkeys which are NOT known, until AFTER the channel is opened and confirmed on the blockchain.
Not sure how you would get both #1 and #2 at the same time. But I am not up to date on the latest LN research.
Paul
[Footnote 1]
I am certainly not a LN expert, so perhaps this analysis is misconceived. But consider these "best case scenario" assumptions for LN:
* Each new channel-open consumes just 32 vbytes (since they are all done via one or more "rich men" who batches all these into one block, 24/7/365)
* Each new channel-open, onboards 5 users at once who are a permanent trust group / channel factory / what-have-you
(these five newcomers must coordinate with each other and the "rich man", presumably via calendly link or whatever, for their one shot at getting on the blockchain).
* That one single channel is able to meet 100% of the user's payment needs
(it never has any problems, with liquidity /balancing /routing /uptime /hotwallet-crashing /counterparty-fees /etc)
(and also, people do NOT desire >1 channel for other reasons: their alt nyms, small business, church, etc)
* 99.9% of the 1MB (vB) blocksize is used for channel-opens (the spare 1000 vb = the coinbase + the single "rich man"-input)
* World population becomes a fixed 8.2 billion (and henceforth stops growing)
By simple envelope math, 6*24*365*(((1000000*.999)/32)*5) / 8.2 billion = ~exactly one year to onboard everyone.
But if the above assumptions contain, say, two orders of magnitude of "optimism", then it would instead take 100 years.
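The envelope math in the footnote above can be reproduced as a short sketch (the constants are simply the optimistic assumptions stated in the footnote, not figures from anywhere else):

```python
# Back-of-the-envelope check of the LN onboarding estimate in the footnote.
BLOCKS_PER_YEAR = 6 * 24 * 365       # ~10-minute blocks
USABLE_VBYTES = 1_000_000 * 0.999    # 99.9% of a 1 MvB block for channel-opens
VBYTES_PER_OPEN = 32                 # one batched channel-open
USERS_PER_OPEN = 5                   # five users onboarded per open
WORLD_POPULATION = 8.2e9             # assumed fixed

users_per_year = BLOCKS_PER_YEAR * (USABLE_VBYTES / VBYTES_PER_OPEN) * USERS_PER_OPEN
years_to_onboard = WORLD_POPULATION / users_per_year
print(f"{years_to_onboard:.2f} years")  # ~1.00 under these assumptions
```

As the footnote notes, each factor of optimism in the assumptions multiplies the result directly, so two orders of magnitude of optimism turns one year into a century.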
[-- Attachment #2: Type: text/html, Size: 5961 bytes --]
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-27 0:58 ` Paul Sztorc
@ 2022-02-27 2:00 ` ZmnSCPxj
2022-02-27 7:25 ` ZmnSCPxj
2022-02-28 0:20 ` Paul Sztorc
0 siblings, 2 replies; 58+ messages in thread
From: ZmnSCPxj @ 2022-02-27 2:00 UTC (permalink / raw)
To: Paul Sztorc, Bitcoin Protocol Discussion
Good morning Paul,
> ***
>
> You have emphasized the following relation: "you have to show your transaction to everyone" = "thing doesn't scale".
>
> However, in LN, there is one transaction which you must, in fact, "show to everyone": your channel-opener.
>
> Amusingly, in the largeblock sidechain, there is not. You can onboard using only the blockspace of the SC.
> (One "rich guy" can first shift 100k coins Main-to-Side, and he can henceforth onboard many users over there. Those users can then onboard new users, forever.)
>
> So it would seem to me, that you are on the ropes, even by your own criterion. [Footnote 1]
>
> ***
>
> Perhaps, someone will invent a way, to LN-onboard WITHOUT needing new layer1 bytes.
>
> If so, a "rich man" could open a LN channel, and gradually transfer it to new people.
>
> Such a technique would need to meet two requirements (or, so it seems to me):
> #1: The layer1 UTXO (that defines the channel) can never change (ie, the 32-bytes which define the p2sh/tapscript/covenant/whatever, must stay what-they-were when the channel was opened).
> #2: The new part-owners (who are getting coins from the rich man), will have new pubkeys which are NOT known, until AFTER the channel is opened and confirmed on the blockchain.
>
> Not sure how you would get both #1 and #2 at the same time. But I am not up to date on the latest LN research.
Yes, using channel factories.
A channel factory is an N-of-N where N >= 3, which uses the same offchain technology to host multiple 2-of-2 channels.
We observe that, just as an offchain structure like a payment channel can host HTLCs, any offchain structure can host a lot of *other* contracts, because the offchain structure can always threaten to drop onchain to enforce any onchain-enforceable contract.
But an offchain structure is just another onchain contract!
Thus, an offchain structure can host many other offchain structures, and thus an N-of-N channel factory can host multiple 2-of-2 channel factories.
(I know we discussed sidechains-within-sidechains before, or at least I mentioned that to you in direct correspondence, this is basically that idea brought to its logical conclusion.)
Thus, while you still have to give *one* transaction to all Bitcoin users, that single transaction can back several channels, up to (N * (N - 1)) / 2.
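The pair-count arithmetic above is just "one 2-of-2 channel per pair of participants"; a minimal sketch (illustrative, not from the original mail):

```python
def max_channels(n: int) -> int:
    """Channels an N-of-N factory can host: one 2-of-2 per participant pair."""
    return n * (n - 1) // 2

# A single onchain transaction backing an N-of-N factory can thus back:
for n in (3, 5, 10):
    print(n, max_channels(n))  # 3 -> 3, 5 -> 10, 10 -> 45
```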
It is not quite matching your description --- the pubkeys of the peer participants need to be fixed beforehand.
However, all it means is some additional pre-planning during setup with no scope for dynamic membership.
At least, you cannot dynamically change membership without onchain action.
You *can* change membership sets by publishing a one-input-one-output transaction onchain, but with Taproot, the new membership set is representable in a single 32-byte Taproot address onchain (admittedly, the transaction input is a txin and thus has overhead 32 bytes plus 1 byte for txout index, and you need 64 bytes signature for Taproot as well).
The advantage is that, in the meantime, if membership set is not changed, payments can occur *without* any data published on the blockchain (literally 0 data).
With sidechains, changing the ownership set requires that the sidechain produce a block.
That block requires a 32-byte commitment in the coinbase.
What is more, if *any* transfers occur on the sidechain, they cannot be real without a sidechain block, that has to be committed on the mainchain.
Thus, while changing the membership set of a channel factory is more expensive (it requires a pointer to the previous txout, a 64-byte Taproot signature, and a new Taproot address), continuous operation does not publish any data at all.
While in sidechains, continuous operation and ordinary payments require ideally one commitment of 32 bytes per mainchain block.
Continuous operation of the sidechain then implies a constant stream of 32-byte commitments, whereas continuous operation of a channel factory, in the absence of membership set changes, has 0 bytes per block being published.
We assume that onboarding new members is much rarer than existing members actually paying each other in an actual economy (after the first burst of onboarding, new members will only arise in proportion to the birth rate, but typical economic transactions occur much more often), so optimizing for the continuous operation seems a better tradeoff.
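The byte-footprint comparison above can be made concrete with a rough sketch. The per-event figures come from the text (32-byte per-block sidechain commitment; a factory membership change costing a 32+1-byte txin, a 64-byte signature, and a 32-byte Taproot output); the "one membership change per month" rate is purely an illustrative assumption:

```python
# Rough onchain-footprint comparison over one year, using the byte figures
# from the text. Membership-change frequency is an assumed illustration.
BLOCKS_PER_YEAR = 6 * 24 * 365

def sidechain_bytes(blocks: int) -> int:
    # 32-byte commitment in every mainchain block, continuously.
    return 32 * blocks

def factory_bytes(membership_changes: int) -> int:
    # txin (32+1) + Taproot signature (64) + new Taproot output (32),
    # published only when the membership set changes.
    return (33 + 64 + 32) * membership_changes

print(sidechain_bytes(BLOCKS_PER_YEAR))  # 1,681,920 bytes/year
print(factory_bytes(12))                 # 1,548 bytes/year at monthly changes
```

The point is the shape of the curves, not the exact numbers: the sidechain's cost scales with elapsed blocks, the factory's only with onboarding events.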
Channel factories have the nice properties:
* N-of-N means that nobody can steal from you.
* Even with a 51% miner, nobody can steal from you as long as none of the N participants is the 51% miner, see the other thread.
* Graceful degradation: even if 1 of the N is offline, payments are done over the hosted 2-of-2s, and the balance of probability is that most of the 2-of-2s have both participants online and payments can continue to occur.
--
The reason why channel factories do not exist *yet* is that the main offchain construction we have, Poon-Dryja, is 2-of-2.
We have Decker-Wattenhofer, which supports N >= 2, but it needs to publish a lot of onchain data in case of dispute, and has lousy UX due to how it uses delays (you can only be safely offline for some small number of blocks, but you have to wait out a large multiple of that parameter).
We also have the newer Decker-Russell-Osuntokun ("eltoo"), but that requires `SIGHASH_NOINPUT`, which is now today called `SIGHASH_ANYPREVOUT`.
`OP_CTV` also is useful for publishing commitments-to-promised-outputs without having to publish outputs right now.
This is why I want to focus on getting both on Bitcoin first, *before* any recursive-contract-enabling technologies.
Admittedly, the recursive-covenant-enabling constructs look like they enable functionality equivalent to `SIGHASH_NOINPUT` and `OP_CTV`, though as I understand them, they would require more bytes than `SIGHASH_NOINPUT` or `OP_CTV`.
And scaling is really improved by reducing the number of bytes published, so there is value in merging in `SIGHASH_ANYPREVOUT` and `OP_CTV` at some point, so why not now.
Regards,
ZmnSCPxj
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-27 2:00 ` ZmnSCPxj
@ 2022-02-27 7:25 ` ZmnSCPxj
2022-02-27 16:59 ` Billy Tetrud
2022-02-28 0:20 ` Paul Sztorc
1 sibling, 1 reply; 58+ messages in thread
From: ZmnSCPxj @ 2022-02-27 7:25 UTC (permalink / raw)
To: ZmnSCPxj, Bitcoin Protocol Discussion
Good morning again Paul,
> With sidechains, changing the ownership set requires that the sidechain produce a block.
> That block requires a 32-byte commitment in the coinbase.
> What is more, if any transfers occur on the sidechain, they cannot be real without a sidechain block, that has to be committed on the mainchain.
The above holds if the mainchain miners also act as sidechain validators.
If they are somehow separate (i.e. blind merge mining), then a separate `OP_BRIBE` transaction is also needed.
Assuming the sidechain validator is using Taproot as well, it needs the 32+1 txin, a 64-byte signature, a 32-byte copy of the sidechain commitment that the miner is being bribed to put in the coinbase, and a txout for any change the sidechain validator has.
This is somewhat worse than the case for channel factories, even if you assume that every block, at least one channel factory has to do an onboarding event.
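Summing the components listed above gives the per-block cost of a blind-merge-mined commitment; the change-output size is an assumption (a standard Taproot output: 8-byte value, 1-byte script length, 34-byte script), since the text only says "a txout for any change":

```python
# Hypothetical per-block bytes for one blind-merge-mined OP_BRIBE transaction.
TXIN = 32 + 1        # previous-txout pointer (32-byte txid + 1-byte index)
SIGNATURE = 64       # Taproot signature
COMMITMENT = 32      # copy of the sidechain commitment being bribed in
CHANGE_TXOUT = 43    # assumed Taproot change output (8 + 1 + 34 bytes)

per_block = TXIN + SIGNATURE + COMMITMENT + CHANGE_TXOUT
print(per_block)  # 172 bytes every mainchain block, under these assumptions
```

Every figure here recurs each mainchain block the sidechain advances, which is what makes it worse than a factory that publishes nothing between membership changes.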
> Thus, while changing the membership set of a channel factory is more expensive (it requires a pointer to the previous txout, a 64-byte Taproot signature, and a new Taproot address), continuous operation does not publish any data at all.
> While in sidehchains, continuous operation and ordinary payments requires ideally one commitment of 32 bytes per mainchain block.
> Continuous operation of the sidechain then implies a constant stream of 32-byte commitments, whereas continuous operation of a channel factory, in the absence of membership set changes, has 0 bytes per block being published.
>
> We assume that onboarding new members is much rarer than existing members actually paying each other in an actual economy (after the first burst of onboarding, new members will only arise in proportion to the birth rate, but typical economic transactions occur much more often), so optimizing for the continuous operation seems a better tradeoff.
Perhaps more illustratively, with channel factories, different layers have different actions they can do, and the only one that needs to be broadcast widely are actions on the onchain layer:
* Onchain: onboarding / deboarding
* Channel Factory: channel topology change
* Channel: payments
This is in contrast with merge-mined Sidechains, where *all* activity requires a commitment on the mainchain:
* Onchain: onboarding / deboarding, payments
While it is true that all onboarding, deboarding, and payments are summarized in a single commitment, notice how in LN-with-channel-factories, all onboarding / deboarding is *also* summarized, but payments *have no onchain impact*, at all.
Without channel factories, LN is only:
* Onchain: onboarding / deboarding, channel topology change
* Channel: payments
So even without channel factories there is already a win, although again, due to the large numbers of channels we need, a channel factory in practice will be needed to get significantly better scaling.
Finally, in practice with Drivechains, starting a new sidechain requires implicit permission from the miners.
With LN, new channels and channel factories do not require any permission, as they are indistinguishable from ordinary transactions.
(the gossip system does leak that a particular UTXO is a particular published channel, but gossip triggers after deep confirmation, at which point it would be too late for miners to censor the channel opening.
The miners can censor channel closure for published channels, admittedly, but at least you can *start* a new channel without being censored, which you cannot do with Drivechain sidechains.)
Regards,
ZmnSCPxj
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-27 7:25 ` ZmnSCPxj
@ 2022-02-27 16:59 ` Billy Tetrud
2022-02-27 23:50 ` Paul Sztorc
0 siblings, 1 reply; 58+ messages in thread
From: Billy Tetrud @ 2022-02-27 16:59 UTC (permalink / raw)
To: ZmnSCPxj, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 5949 bytes --]
@Paul
> I think largeblock sidechains should be reconsidered:
> * They are not a blocksize increase.
This is short-sighted. They would absolutely be a blocksize increase for
those following a large block sidechain. While sure, it wouldn't affect
bitcoin users who don't follow that sidechain, it's misleading to call it
"not a blocksize increase" for everyone.
> * They allow users to be different. Some can pay more (for more decentralization), some less (for less decentralization).
> gambling the entire future of BTC, on the premise that strong decentralization will always be needed at all points in time.
Decentralization isn't just something where more is more valuable and
less is less valuable. Decentralization is either enough to stop a
class of attack or it's not. It's pretty binary. If the decentralization
is not enough, it would be a pretty huge catastrophe for those
involved. It's pretty clear that making the blocksize e.g. 10 times
larger is a poor design choice. So advocating for such a thing on a
sidechain is just as bad as advocating for it on an altcoin.
Even if people only put a couple satoshis in such a sidechain at a
time, and don't feel the loss very much, the *world* would feel the
loss. Eg if everyone had $1 in such a system, and someone stole it
all, it would be a theft of billions of dollars. The fact that no
individual would feel much pain would make it not much less harmful to
society.
> We can learn from past mistakes -- when a new largeblock sidechain is needed, we can make a new one from scratch, using everything we know.
If there's some design principles that *allow* for safely increasing the
blocksize substantially like that, then I'd advocate for it in bitcoin. But
the goal of sidechains should not be "shoot from the hip and after everyone
on that sidechain gets burned we'll have learned valuable lessons". That's
not how engineering works. That's akin to reckless human experimentation.
On Sun, Feb 27, 2022 at 1:25 AM ZmnSCPxj via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> Good morning again Paul,
>
> > With sidechains, changing the ownership set requires that the sidechain
> produce a block.
> > That block requires a 32-byte commitment in the coinbase.
> > What is more, if any transfers occur on the sidechain, they cannot be
> real without a sidechain block, that has to be committed on the mainchain.
>
> The above holds if the mainchain miners also act as sidechain validators.
> If they are somehow separate (i.e. blind merge mining), then the
> `OP_BRIBE` transaction needed is also another transaction.
> Assuming the sidechain validator is using Taproot as well, it needs the
> 32+1 txin, a 64-byte signature, a 32-byte copy of the sidechain commitment
> that the miner is being bribed to put in the coinbase, and a txout for any
> change the sidechain validator has.
>
> This is somewhat worse than the case for channel factories, even if you
> assume that every block, at least one channel factory has to do an
> onboarding event.
>
> > Thus, while changing the membership set of a channel factory is more
> expensive (it requires a pointer to the previous txout, a 64-byte Taproot
> signature, and a new Taproot address), continuous operation does not
> publish any data at all.
> > While in sidehchains, continuous operation and ordinary payments
> requires ideally one commitment of 32 bytes per mainchain block.
> > Continuous operation of the sidechain then implies a constant stream of
> 32-byte commitments, whereas continuous operation of a channel factory, in
> the absence of membership set changes, has 0 bytes per block being
> published.
> >
> > We assume that onboarding new members is much rarer than existing
> members actually paying each other in an actual economy (after the first
> burst of onboarding, new members will only arise in proportion to the birth
> rate, but typical economic transactions occur much more often), so
> optimizing for the continuous operation seems a better tradeoff.
>
> Perhaps more illustratively, with channel factories, different layers have
> different actions they can do, and the only one that needs to be broadcast
> widely are actions on the onchain layer:
>
> * Onchain: onboarding / deboarding
> * Channel Factory: channel topology change
> * Channel: payments
>
> This is in contrast with merge-mined Sidechains, where *all* activity
> requires a commitment on the mainchain:
>
> * Onchain: onboarding / deboarding, payments
>
> While it is true that all onboarding, deboarding, and payments are
> summarized in a single commitment, notice how in LN-with-channel-factories,
> all onboarding / deboarding is *also* summarized, but payments *have no
> onchain impact*, at all.
>
> Without channel factories, LN is only:
>
> * Onchain: onboarding / deboarding, channel topology change
> * Channel: payments
>
> So even without channel factories there is already a win, although again,
> due to the large numbers of channels we need, a channel factory in practice
> will be needed to get significantly better scaling.
>
>
> Finally, in practice with Drivechains, starting a new sidechain requires
> implicit permission from the miners.
> With LN, new channels and channel factories do not require any permission,
> as they are indistinguishable from ordinary transactions.
> (the gossip system does leak that a particular UTXO is a particular
> published channel, but gossip triggers after deep confirmation, at which
> point it would be too late for miners to censor the channel opening.
> The miners can censor channel closure for published channels, admittedly,
> but at least you can *start* a new channel without being censored, which
> you cannot do with Drivechain sidechains.)
>
>
> Regards,
> ZmnSCPxj
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-27 16:59 ` Billy Tetrud
@ 2022-02-27 23:50 ` Paul Sztorc
0 siblings, 0 replies; 58+ messages in thread
From: Paul Sztorc @ 2022-02-27 23:50 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
On 2/27/2022 11:59 AM, Billy Tetrud via bitcoin-dev wrote:
> @Paul
> > I think largeblock sidechains should be reconsidered:
> > * They are not a blocksize increase.
> This is short-sighted. They would absolutely be a blocksize increase
> for those following a large block sidechain. While, sure, it wouldn't
> affect bitcoin users who don't follow that sidechain, it's misleading
> to call it "not a blocksize increase" for everyone.
Your larger explanation is entirely correct.
Many of the important anti-largeblock arguments are not relevant to the largeblock sidechain strategy, but some of them still are.
My concern is that people will jump to conclusions, and use the old 2015 arguments against "a blocksize increase" against this idea.
Hence my small bullet point.
> > * They allow users to be different. Some can pay more (for more decentralization), some less (for less decentralization).
> > gambling the entire future of BTC, on the premise that strong decentralization will always be needed at all points in time.
> > Decentralization isn't just something where more is more valuable and less is less valuable. Decentralization is either enough to stop a class of attack or it's not. It's pretty binary. If the decentralization is not enough, it would be a pretty huge catastrophe for those involved. It's pretty clear that making the blocksize eg 10 times larger is a poor design choice. So advocating for such a thing on a sidechain is just as bad as advocating for it on an altcoin.
> Even if people only put a couple satoshis in such a sidechain at a time, and don't feel the loss very much, the *world* would feel the loss. Eg if everyone had $1 in such a system, and someone stole it all, it would be a theft of billions of dollars. The fact that no individual would feel much pain would make it not much less harmful to society.
I believe that you have missed my point. Let me try to explain it in more detail.
First, imagine a magic spell is cast, which 100% prevents the "class of attack" which you mention. In that situation, all of the work that BTC does to remain decentralized is a pure burden with absolutely no benefit whatsoever. Rational users will then become indifferent to centralization. Since decentralization has tradeoffs, users will tend to be drawn towards 'crypto' projects that have very low decentralization.
Next, imagine that the spell is lifted, and the attacks start. Users will be, of course, drawn back towards BTC, and they will appreciate it for its decentralization.
So what's the problem? Well, I believe that money has very strong network effects. Therefore, I believe that "user inertia" will be much stronger than usual. At a certain critical mass it may be insurmountable. So, at certain points along the spectrum, users will "clump up" and get "stuck".
Thus, we may "clump" on a chain that is not the most decentralized one. And an adversary can use this to their advantage. They can "grow" the centralized chain at first, to help it, and help ensure that they do not have to deal with the most decentralized chain.
This entire issue is avoided completely, if all the chains --decentralized and centralized-- are in the same monetary unit. Then, the monetary network effects never interfere, and the decentralized chain is always guaranteed to exist.
As for the phrase "it's pretty clear that making the blocksize eg 10 times larger is a poor design choice": I think this entire way of reasoning about the blocksize is one that only applies to a non-sidechain world.
In contrast, in a world where many chains can be created, it does not matter what Some Guy thinks is "pretty clear". The only thing that matters is that people can try things out, are rewarded for successes, and are punished for mistakes.
So: if someone makes a largeblock sidechain, and the design is bad, the chain fails, and their reputation suffers.
In my way-of-reasoning, someone is actually in the wrong, if they proactively censor an experiment of any type. If a creator is willing to stand behind something, then it should be tried.
In fact, it is often best for everyone (especially the end user), if a creator keeps their ideas secret (from the "peer review" community). That way they can at least get credit/glory. The soon-to-be-successful experiments of tomorrow, should be incomprehensible to the experts of today. That's what makes them innovations.
Finally, to me it makes no difference if users have their funds stolen from a centralized Solana contract (because there is only one full node which the operator resets), or from a bip300 centralized bit-Solana sidechain (for the same reason). I don't see why the tears shed would be any different.
> > We can learn from past mistakes -- when a new largeblock sidechain is needed, we can make a new one from scratch, using everything we know.
> If there are some design principles that *allow* for safely increasing the blocksize substantially like that, then I'd advocate for it in bitcoin. But the goal of sidechains should not be "shoot from the hip and after everyone on that sidechain gets burned we'll have learned valuable lessons". That's not how engineering works. That's akin to reckless human experimentation.
Again, we perhaps have a fundamental disagreement on this point.
In 2008 a FED chairman might have said to Satoshi, "If there were design principles that *allowed* for private, digital, bearer-instrument payments, then of course I'd advocate for it here at the FED. But the goal of bitcoin should not be 'shoot from the hip ...'. That's not how engineering works. That's akin to reckless human experimentation."
I think that the most dangerous experiment of all, is to adopt the 'reckless' policy of suppressing creativity.
If instead you said something like, "If a 10x blocksize chain is ever demonstrated to have property XYZ, then I will repent my error by putting my own children to death", then the audience would at least have some idea of your confidence and sincerity. But, again, a FED chairman could say exactly that, about Bitcoin. And they would still have been wrong. And even if they were right (on a fluke) they would still have been wrong to prevent the idea from being tried.
Censorship (the suppression of ideas, merely because you disagree with them) is not only immoral, on this issue it is also largely pointless. Today, a Bitcoiner can sell their BTC for Solana, or BSV, and there is nothing anyone here can do about it. Altcoin Solana vs bip300 bit-Solana, would seem to be equivalently reckless to me. So, your implicit advice (of bureaucracy-based sidechain drop/add), seems to fail to meet your own criterion (of preventing human recklessness). And it certainly does other bad things for no reason (pumps an altcoin, decreases btc fee revenues /hashrate, etc).
Paul
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-27 2:00 ` ZmnSCPxj
2022-02-27 7:25 ` ZmnSCPxj
@ 2022-02-28 0:20 ` Paul Sztorc
2022-02-28 6:49 ` ZmnSCPxj
1 sibling, 1 reply; 58+ messages in thread
From: Paul Sztorc @ 2022-02-28 0:20 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
On 2/26/2022 9:00 PM, ZmnSCPxj wrote:
> ...
>> Such a technique would need to meet two requirements (or, so it seems to me):
>> #1: The layer1 UTXO (that defines the channel) can never change (ie, the 32-bytes which define the p2sh/tapscript/covenant/whatever, must stay what-they-were when the channel was opened).
>> #2: The new part-owners (who are getting coins from the rich man), will have new pubkeys which are NOT known, until AFTER the channel is opened and confirmed on the blockchain.
>>
>> Not sure how you would get both #1 and #2 at the same time. But I am not up to date on the latest LN research.
> Yes, using channel factories.
I think you may be wrong about this.
Channel factories do not meet requirement #2, as they cannot grow to onboard new users (ie, new pubkeys).
The factory-open requires that people pay to (for example), a 5-of-5 multisig. So all 5 fixed pubkeys must be known, before the factory-open is confirmed, not after.
> We assume that onboarding new members is much rarer than existing members actually paying each other
Imagine that Bitcoin could only onboard 5 new users per millennium, but once onboarded they had payment nirvana (could transact hundreds of trillions of times per second, privately, smart contracts, whatever).
Sadly, the payment nirvana would not matter. The low onboarding rate would kill the project.
The difference between the two rates [onboarding and payment], is not relevant. EACH rate must meet the design goal.
It is akin to saying: " Our car shifts from park to drive in one-millionth of a second, but it can only shift into reverse once per year; but that is OK because 'we assume that going in reverse is much rarer than driving forward' ".
> Continuous operation of the sidechain then implies a constant stream of 32-byte commitments, whereas continuous operation of a channel factory, in the absence of membership set changes, has 0 bytes per block being published.
That's true, but I think you have neglected to actually take out your calculator and run the numbers.
Hypothetically, 10 largeblock-sidechains would be 320 bytes per block (0.032%, essentially nothing).
Those 10, could onboard 33% of the planet in a single month [footnote], even if each sc-onboard required an average of 800 sc-bytes.
Certainly not a perfect idea, as the SC onboarding rate is the same as the payment rate. But once they are onboarded, those users can immediately join the LN *from* their sidechain. (All of the SC LNs would be interoperable.)
Such a strategy would take enormous pressure *off* of layer1 (relative to the "LN only" strategy). The layer1 blocksize could even **shrink** from 4 MB (wu) to 400 kb, or lower. That would cancel out the 320 bytes of overhead, many hundreds of times over.
Paul
[footnote] Envelope math, 10 sidechains, each 50 MB forever-fixed blocksize (which is a mere 12.5x our current 4M wu limit): 10 * 6*24*30 * ((50*1000*1000)/800) / 8.2 billion = .32926
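The footnote's arithmetic can be reproduced directly (a throwaway sketch; all figures are the assumptions stated in the text, not measurements):

```python
# Reproduce the envelope math from the footnote above.
sidechains = 10
blocks_per_month = 6 * 24 * 30              # ~one block per 10 minutes
sc_block_bytes = 50 * 1000 * 1000           # 50 MB forever-fixed sidechain blocks
bytes_per_onboard = 800                     # assumed sc-bytes per onboard
world_population = 8.2e9

onboards_per_month = sidechains * blocks_per_month * (sc_block_bytes // bytes_per_onboard)
fraction = onboards_per_month / world_population
print(fraction)                             # ~0.329, i.e. ~33% of the planet
```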
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-28 0:20 ` Paul Sztorc
@ 2022-02-28 6:49 ` ZmnSCPxj
2022-02-28 7:55 ` vjudeu
2022-02-28 22:54 ` Paul Sztorc
0 siblings, 2 replies; 58+ messages in thread
From: ZmnSCPxj @ 2022-02-28 6:49 UTC (permalink / raw)
To: Paul Sztorc, Bitcoin Protocol Discussion
Good morning Paul,
> On 2/26/2022 9:00 PM, ZmnSCPxj wrote:
>
> > ...
> >
> > > Such a technique would need to meet two requirements (or, so it seems to me):
> > > #1: The layer1 UTXO (that defines the channel) can never change (ie, the 32-bytes which define the p2sh/tapscript/covenant/whatever, must stay what-they-were when the channel was opened).
> > > #2: The new part-owners (who are getting coins from the rich man), will have new pubkeys which are NOT known, until AFTER the channel is opened and confirmed on the blockchain.
> > >
> > > Not sure how you would get both #1 and #2 at the same time. But I am not up to date on the latest LN research.
> >
> > Yes, using channel factories.
>
> I think you may be wrong about this.
> Channel factories do not meet requirement #2, as they cannot grow to onboard new users (ie, new pubkeys).
> The factory-open requires that people pay to (for example), a 5-of-5 multisig. So all 5 fixed pubkeys must be known, before the factory-open is confirmed, not after.
I am not wrong about this.
You can cut-through the closure of one channel factory with the opening of another channel factory with the same 5 fixed pubkeys *plus* an additional 100 new fixed pubkeys.
With `SIGHASH_ANYPREVOUT` (which we need for Decker-Russell-Osuntokun-based channel factories), you do not even need to make new signatures for the existing channels: you just reuse the existing channel signatures, and whether or not the *single*, one-input-one-output close+reopen transaction is confirmed, the existing channels remain usable (the signatures can be used both pre-reopen and post-reopen).
That is why I said changing the membership set requires onchain action.
But the onchain action is *only* a 1-input-1-output transaction, and with Taproot the signature needed is just 64 bytes witness (1 weight unit per byte), I had several paragraphs describing that, did you not read them?
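The cut-through can be sketched as a toy data-flow model (the types and names below are invented for illustration; this is not a real Bitcoin library API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Factory:
    members: tuple   # fixed pubkeys behind the factory's funding output
    utxo: str        # "txid:vout" of the onchain funding output

def cut_through_reopen(old: Factory, new_members: tuple, new_txid: str) -> Factory:
    """One 1-input-1-output tx closes the old factory and opens the new one.

    With SIGHASH_ANYPREVOUT, existing channel signatures rebind to either the
    pre-reopen or post-reopen output, so no channel re-signing is needed.
    """
    return Factory(members=old.members + new_members, utxo=new_txid + ":0")

old = Factory(members=("K1", "K2", "K3", "K4", "K5"), utxo="aa" * 32 + ":0")
new = cut_through_reopen(old, tuple("N%d" % i for i in range(100)), "bb" * 32)
assert len(new.members) == 105   # 5 existing + 100 newly onboarded pubkeys
```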
Note as well that with sidechains, onboarding also requires action on the mainchain, in the form of a sideblock merge-mined on the mainchain.
>
> > We assume that onboarding new members is much rarer than existing members actually paying each other
>
> Imagine that Bitcoin could only onboard 5 new users per millennium, but once onboarded they had payment nirvana (could transact hundreds of trillions of times per second, privately, smart contracts, whatever).
> Sadly, the payment nirvana would not matter. The low onboarding rate would kill the project.
Fortunately even without channel factories the onboarding rate of LN is much much higher than that.
I mean, like, LN *is* live and *is* working, today, and (at least where I have looked, but I could be provincial) has a lot more onboarding activity than half-hearted sidechains like Liquid or Rootstock.
> The difference between the two rates [onboarding and payment], is not relevant. EACH rate must meet the design goal.
> It is akin to saying: " Our car shifts from park to drive in one-millionth of a second, but it can only shift into reverse once per year; but that is OK because 'we assume that going in reverse is much rarer than driving forward' ".
Your numbers absolutely suck and have no basis in reality, WTF.
Even without batched channel openings, with a typical transaction of 2 inputs, 1 LN channel output, and a change output, you can onboard ~1250 channels per mainchain block (admittedly, without any other activity).
Let us assume every user needs 5 channels on average and that is still 250 users per 10 minutes.
I expect channel factories to increase that by about 10x to 100x more, and then you are going to hit the issue of getting people to *use* Bitcoin rather than many users wanting to get in but being unable to due to block size limits.
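Restating the rate estimate above as arithmetic (the per-block figure, channels-per-user average, and factory multiplier are the assumptions from this email, not measured values):

```python
# Back-of-envelope restatement of the onboarding-rate claim above.
channels_per_block = 1250     # assumed: ~2-in/2-out channel opens filling a block
channels_per_user = 5         # assumed average channels per user
factory_multiplier = 100      # upper end of the 10x-100x channel-factory guess

users_per_block = channels_per_block // channels_per_user
users_per_day = users_per_block * 6 * 24
print(users_per_block, users_per_day)        # 250 per 10 minutes, 36000 per day
print(users_per_day * factory_multiplier)    # up to ~3.6 million per day
```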
>
> > Continuous operation of the sidechain then implies a constant stream of 32-byte commitments, whereas continuous operation of a channel factory, in the absence of membership set changes, has 0 bytes per block being published.
>
> That's true, but I think you have neglected to actually take out your calculator and run the numbers.
>
> Hypothetically, 10 largeblock-sidechains would be 320 bytes per block (0.032%, essentially nothing).
> Those 10, could onboard 33% of the planet in a single month [footnote], even if each sc-onboard required an average of 800 sc-bytes.
>
> Certainly not a perfect idea, as the SC onboarding rate is the same as the payment rate. But once they are onboarded, those users can immediately join the LN *from* their sidechain. (All of the SC LNs would be interoperable.)
>
> Such a strategy would take enormous pressure *off* of layer1 (relative to the "LN only" strategy). The layer1 blocksize could even **shrink** from 4 MB (wu) to 400 kb, or lower. That would cancel out the 320 bytes of overhead, many hundreds of times over.
>
> Paul
>
> [footnote] Envelope math, 10 sidechains, each 50 MB forever-fixed blocksize (which is a mere 12.5x our current 4M wu limit): 10 * 6*24*30 * ((50*1000*1000)/800) / 8.2 billion = .32926
Yes, and 33% of the planet want to use Bitcoin in the next month.
The onboarding rate only needs to be as fast as the rate at which people want to join Bitcoin, and any security you sacrifice in order to get a higher number than that is security you are sacrificing needlessly for extra capacity you are unable to utilize.
As I pointed out in the other thread:
* LN:
* Funds can be stolen IF:
* There is a 51% miner, AND
* The 51% miner is a member of a channel/channel factory you are in.
* Drivechains:
* Funds can be stolen IF:
* There is a 51% miner.
Now of course there is always the possibility that the 51% miner is in *every* channel factory globally.
But there is also the possibility that the 51% miner exists, but is *not* on every channel factory.
Indeed, for any arbitrary channel or factory, I expect that the probability of the 51% miner being a member is less than 100%, thus the combined probability is lower than Drivechains.
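The conjunction of conditions can be made concrete with placeholder probabilities (purely illustrative numbers invented here, not estimates from the email):

```python
# Purely illustrative: both probabilities below are made-up placeholders.
p_51_miner = 0.01           # P(a 51% miner exists)
p_in_your_factory = 0.10    # P(that miner is in your channel/factory)

p_theft_drivechain = p_51_miner                 # one condition suffices
p_theft_ln = p_51_miner * p_in_your_factory     # both conditions required

# As long as p_in_your_factory < 1, LN theft is strictly less likely.
assert p_theft_ln < p_theft_drivechain
```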
So there is a real degradation of security in Drivechains, and if you compute the numbers, I am reasonably sure that 33% of the world is unlikely to want to use Bitcoin within one month.
I mean, we already had a pandemic and everyone going online and so on, and yet Bitcoin blockchain feerates are *still* small; I had to fix a bug in CLBOSS that came up only due to hitting the minimum feerate. So no --- people are not joining Bitcoin at a rate faster than Bitcoin + LN can handle, even with a pretty good reason to move payments online.
Worse, once 100% of the world is onboarded, the extra onboarding capacity is useless since the onboarding rate can only match the birth rate (including birth of legal persons such as corporations), which we expect is much lower than 33% increase per ***month***.
You are buying too much capacity at a real degradation in security, and I am not convinced the extra capacity is worth the loss of security.
Separating the onboarding rate from the payment rate is a *good thing*, because we can then design their structures differently.
Make onboarding slow but secure (so that their money is very secure), but make payment rate faster and less secure (because in-flight payments are likely to be much smaller than the total owned funds).
Regards,
ZmnSCPxj
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-28 6:49 ` ZmnSCPxj
@ 2022-02-28 7:55 ` vjudeu
2022-03-04 8:42 ` ZmnSCPxj
2022-02-28 22:54 ` Paul Sztorc
1 sibling, 1 reply; 58+ messages in thread
From: vjudeu @ 2022-02-28 7:55 UTC (permalink / raw)
To: ZmnSCPxj, Paul Sztorc, Bitcoin Protocol Discussion
> Continuous operation of the sidechain then implies a constant stream of 32-byte commitments, whereas continuous operation of a channel factory, in the absence of membership set changes, has 0 bytes per block being published.
The sidechain can push zero bytes on-chain, just by placing a sidechain hash in OP_RETURN inside TapScript. Then, every sidechain node can check that "this sidechain hash is connected with this Taproot address", without pushing 32 bytes on-chain.
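A simplified sketch of that commitment: the leaf below mimics BIP-341's tagged-hash tapleaf construction, but omits the actual key tweak, and the script and sidechain hash are made-up examples:

```python
import hashlib

def tagged_hash(tag: str, data: bytes) -> bytes:
    # BIP-340/341 style tagged hash: SHA256(SHA256(tag) || SHA256(tag) || data)
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + data).digest()

# Hypothetical sidechain block hash to commit to.
sidechain_hash = hashlib.sha256(b"sidechain block #1000").digest()

# Leaf script: OP_RETURN <32-byte hash> (0x6a = OP_RETURN, 0x20 = push 32 bytes)
leaf_script = b"\x6a\x20" + sidechain_hash

# tapleaf hash = tagged_hash("TapLeaf", leaf_version || compact_size(len) || script)
leaf = tagged_hash("TapLeaf", b"\xc0" + bytes([len(leaf_script)]) + leaf_script)

# This leaf hash would tweak the internal key; a sidechain node given the
# control block can verify the commitment with zero extra onchain bytes.
print(leaf.hex())
```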
On 2022-02-28 08:13:03 user ZmnSCPxj via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-28 6:49 ` ZmnSCPxj
2022-02-28 7:55 ` vjudeu
@ 2022-02-28 22:54 ` Paul Sztorc
2022-03-01 5:39 ` Billy Tetrud
1 sibling, 1 reply; 58+ messages in thread
From: Paul Sztorc @ 2022-02-28 22:54 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
On 2/28/2022 1:49 AM, ZmnSCPxj wrote:
> ...
>>>> ...
>>>>
>>>> Perhaps, someone will invent a way, to LN-onboard WITHOUT needing new layer1 bytes.
>>>>
>>>> If so, a "rich man" could open a LN channel, and gradually transfer it to new people.
>>>>
>>>> Such a technique would need to meet two requirements (or, so it seems to me):
>>>> #1: The layer1 UTXO (that defines the channel) can never change (ie, the 32-bytes which define the p2sh/tapscript/covenant/whatever, must stay what-they-were when the channel was opened).
>>>> #2: The new part-owners (who are getting coins from the rich man), will have new pubkeys which are NOT known, until AFTER the channel is opened and confirmed on the blockchain.
>>>>
>>>> Not sure how you would get both #1 and #2 at the same time. But I am not up to date on the latest LN research.
>>> Yes, using channel factories.
>> I think you may be wrong about this.
>> ...
> I am not wrong about this.
Well, let's take a closer look then.
The topic was: "a way, to LN-onboard [a new pubkey] WITHOUT needing new layer1 bytes".
By which I meant, that I could generate a new pubkey right now, and add it to the LN, without any onchain action.
I can shorten and restate the two requirements (and reorder them) as:
#2: Can later add a new public key to the membership set.
#1: Without an onchain action.
And yet you yourself say, very clearly:
> ... That is why I said changing the membership set requires onchain action.
Which would seem to directly contradict what you say about channel factories.
Unless you can show me how to add my new pubkey_4, to a 3-of-3 channel factory opened last year. Without using an onchain action.
You seem to want to instead change the subject. (To something like: 'we can do better than the rate (32 bytes per 5 onboards) from your footnote'.)
Which is fine. But it is not what I brought up.
***
In general, you seem to have a future in mind, where new users onboard via factory.
For example, 50,000 new users want to onboard in the next block. These strangers spontaneously organize into 1000 factories of 55 people each (50 newbies with zero coins + 5 wealthier BTC holders who have lots of coins). They then broadcast into the block and join Bitcoin.
And this one factory provides them with many channels, so it can meet most/all of their needs.
I am not here to critique factories. I was simply observing that your logic "sidechains don't scale, because you have to share your messages" is not quite airtight, because in the case of onboarding the situation is reversed and so supports the exact opposite conclusion.
I believe I have made my point by now. It should be easy for people to see what each of us has in mind, and the strengths and weaknesses.
I am curious about something, though. Maybe you can help me.
Presumably there are risks to large factories. Perhaps an attacker could join each new factory with just $1 of BTC, spend this $1, and then refuse to cooperate with the factory any further. Thus they can disable the factory at the cost of one rented dollar.
If 1000 factories are opened per block, this would be 52.5 M factories per year: $52.5 million per year to disable all the factories out of spite. (All of which they would eventually get back.) I can think of a few people who might try it.
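The cost estimate checks out on a calculator (figures from the paragraph above; the $1-per-factory griefing cost is the hypothetical posited there):

```python
# Envelope math for the $1-per-factory griefing scenario above.
factories_per_block = 1000
blocks_per_year = 6 * 24 * 365            # 52,560 blocks per year
factories_per_year = factories_per_block * blocks_per_year
cost_usd = factories_per_year * 1         # $1 temporarily locked per factory
print(factories_per_year, cost_usd)       # 52560000 52560000 (~$52.5M, recoverable)
```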
> I mean, like, LN ... has a lot more onboarding activity than half-hearted sidechains like Liquid or Rootstock.
I don't see the relevance of this. We are talking about the future (theoretical), not the past (empirical).
For example, someone could say "Ethereum has a lot more onboarding activity than LN ..." but this would also make no difference to anything.
> ...The onboarding rate only needs to be as fast as the rate at which people want to join Bitcoin.
> ...
>
> As I pointed out in the other thread:
>
> * LN:
> * Funds can be stolen IF:
> * There is a 51% miner, AND
> * The 51% miner is a member of a channel/channel factory you are in.
> * Drivechains:
> * Funds can be stolen IF:
> * There is a 51% miner.
> ...
> So there is a real degradation of security in Drivechains, and if you compute the numbers, I am reasonably sure that 33% of the world is unlikely to want to use Bitcoin within one month.
> I mean we already had a pandemic and everyone going online and so on, and yet Bitcoin blockchain feerates are *still* small, I had to fix a bug in CLBOSS that came up only due to hitting the minimum feerate, so no --- people are not joining Bitcoin at a rate faster than Bitcoin + LN can handle it, even with a pretty good reason to move payments online.
>
> Worse, once 100% of the world is onboarded, the extra onboarding capacity is useless since the onboarding rate can only match the birth rate (including birth of legal persons such as corporations), which we expect is much lower than 33% increase per ***month***.
>
> You are buying too much capacity at a real degradation in security, and I am not convinced the extra capacity is worth the loss of security.
>
> Separating the onboarding rate from the payment rate is a *good thing*, because we can then design their structures differently.
> Make onboarding slow but secure (so that their money is very secure), but make payment rate faster and less secure (because in-flight payments are likely to be much smaller than the total owned funds).
Obviously I don't agree with any of these sentences (most are irrelevant, some false). But I would only be repeating myself.
Paul
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-28 22:54 ` Paul Sztorc
@ 2022-03-01 5:39 ` Billy Tetrud
2022-03-02 0:00 ` Paul Sztorc
0 siblings, 1 reply; 58+ messages in thread
From: Billy Tetrud @ 2022-03-01 5:39 UTC (permalink / raw)
To: Paul Sztorc, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 7969 bytes --]
@Paul
> I believe that money has very strong network effects. ... users will
"clump up" and get "stuck".
I'm of the same opinion.
> This entire issue is avoided completely, if all the chains
--decentralized and centralized-- are in the same monetary unit. Then, the
monetary network effects never interfere, and the decentralized chain is
always guaranteed to exist.
It sounds like what you're saying is that without side chains, everyone
might switch entirely to some altcoin and bitcoin will basically die. And
at that point, the insecurity of that coin people switched to can be
heavily exploited by some attacker(s). Is that right? It's an interesting
thought experiment. However, it leads me to wonder: if a sidechain gets so
popular that it dominates the main chain, why would people keep that main
chain around at all? A sidechain could eject the main chain and all its
baggage if it got so big. So I don't think it can really be said that the
problem can be avoided "completely". But in any case, I see your line of
thinking.
> someone is actually in the wrong, if they proactively censor an
experiment of any type. If a creator is willing to stand behind something,
then it should be tried.
> it makes no difference if users have their funds stolen from a
centralized Solana contract or from a bip300 centralized bit-Solana
sidechain. I don't see why the tears shed would be any different.
I agree with you. My point was not that we should stop anyone from doing
this. My point was only that we shouldn't advocate for ideas we think
aren't good. You were advocating for a "largeblock sidechain", and unless
you have good reasons to think that is an idea likely to succeed and want
to share them with us, then you shouldn't be advocating for that. But
certainly if someone *does* think so and has their own reasons, I wouldn't
want to censor or stop them. But I wouldn't advocate for them to do it
unless their ideas were convincing to me, because I know enough to know the
dangers of large block blockchains.
On Mon, Feb 28, 2022 at 4:55 PM Paul Sztorc via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> On 2/28/2022 1:49 AM, ZmnSCPxj wrote:
>
> ...
>
> ...
>
> Perhaps, someone will invent a way, to LN-onboard WITHOUT needing new layer1 bytes.
>
> If so, a "rich man" could open a LN channel, and gradually transfer it to new people.
>
> Such a technique would need to meet two requirements (or, so it seems to me):
> #1: The layer1 UTXO (that defines the channel) can never change (ie, the 32-bytes which define the p2sh/tapscript/covenant/whatever, must stay what-they-were when the channel was opened).
> #2: The new part-owners (who are getting coins from the rich man), will have new pubkeys which are NOT known, until AFTER the channel is opened and confirmed on the blockchain.
>
> Not sure how you would get both #1 and #2 at the same time. But I am not up to date on the latest LN research.
>
> Yes, using channel factories.
>
> I think you may be wrong about this.
> ...
>
> I am not wrong about this.
>
> Well, let's take a closer look then.
>
> The topic was: "a way, to LN-onboard [a new pubkey] WITHOUT needing new layer1 bytes".
>
> By which I meant, that I could generate a new pubkey right now, and add it to the LN, without any onchain action.
>
> I can shorten and restate the two requirements (and reorder them) as:
> #2: Can later add a new public key to the membership set.
> #1: Without an onchain action.
>
> And yet you yourself say, very clearly:
>
>
> ... That is why I said changing the membership set requires onchain action.
>
> Which would seem to directly contradict what you say about channel
> factories. Unless you can show me how to add my new pubkey_4, to a 3-of-3
> channel factory opened last year. Without using an onchain action. You seem
> to want to instead change the subject. (To something like: 'we can do
> better than the rate (32 bytes per 5 onboards) from your footnote'.) Which is
> fine. But it is not what I brought up.
>
> ***
>
> In general, you seem to have a future in mind, where new users onboard via factory.
> For example, 50,000 new users want to onboard in the next block. These strangers spontaneously organize into 1000 factories of 55 people each (50 newbies with zero coins + 5 wealthier BTC holders who have lots of coins). They then broadcast into the block and join Bitcoin.
> And each factory provides its members with many channels, so it can meet most/all of their needs.
>
> I am not here to critique factories. I was simply observing that your logic "sidechains don't scale, because you have to share your messages" is not quite airtight, because in the case of onboarding the situation is reversed and so supports the exact opposite conclusion.
> I believe I have made my point by now. It should be easy for people to see what each of us has in mind, and the strengths and weaknesses.
>
> I am curious about something, though. Maybe you can help me.
> Presumably there are risks to large factories. Perhaps an attacker could join each new factory with just $1 of BTC, spend this $1, and then refuse to cooperate with the factory any further. Thus they can disable the factory at a cost of one rented dollar.
> If 1000 factories are opened per block, this would be about 52.5 million factories per year, or $52.5 million USD per year to disable all the factories out of spite. (All of which they would eventually get back.) I can think of a few people who might try it.
>
>
> I mean, like, LN ... has a lot more onboarding activity than half-hearted sidechains like Liquid or Rootstock.
>
> I don't see the relevance of this. We are talking about the future
> (theoretical), not the past (empirical). For example, someone could say
> "Ethereum has a lot more onboarding activity than LN ..." but this would
> also make no difference to anything.
>
> ...The onboarding rate only needs to be as fast as the rate at which people want to join Bitcoin.
> ...
>
> As I pointed out in the other thread:
>
> * LN:
> * Funds can be stolen IF:
> * There is a 51% miner, AND
> * The 51% miner is a member of a channel/channel factory you are in.
> * Drivechains:
> * Funds can be stolen IF:
> * There is a 51% miner.
> ...
> So there is a real degradation of security in Drivechains, and if you compute the numbers, I am reasonably sure that 33% of the world is unlikely to want to use Bitcoin within one month.
> I mean we already had a pandemic and everyone going online and so on, and yet Bitcoin blockchain feerates are *still* small, I had to fix a bug in CLBOSS that came up only due to hitting the minimum feerate, so no --- people are not joining Bitcoin at a rate faster than Bitcoin + LN can handle it, even with a pretty good reason to move payments online.
>
> Worse, once 100% of the world is onboarded, the extra onboarding capacity is useless since the onboarding rate can only match the birth rate (including birth of legal persons such as corporations), which we expect is much lower than 33% increase per ***month***.
>
> You are buying too much capacity at a real degradation in security, and I am not convinced the extra capacity is worth the loss of security.
>
> Separating the onboarding rate from the payment rate is a *good thing*, because we can then design their structures differently.
> Make onboarding slow but secure (so that their money is very secure), but make payment rate faster and less secure (because in-flight payments are likely to be much smaller than the total owned funds).
>
> Obviously I don't agree with any of these sentences (most are irrelevant, some false). But I would only be repeating myself.
>
> Paul
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-03-01 5:39 ` Billy Tetrud
@ 2022-03-02 0:00 ` Paul Sztorc
2022-03-04 12:35 ` Billy Tetrud
0 siblings, 1 reply; 58+ messages in thread
From: Paul Sztorc @ 2022-03-02 0:00 UTC (permalink / raw)
To: Billy Tetrud, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 5128 bytes --]
On 3/1/2022 12:39 AM, Billy Tetrud wrote:
>> This entire issue is avoided completely, if all the chains
>> --decentralized and centralized-- are in the same monetary unit.
>> Then, the monetary network effects never interfere, and the
>> decentralized chain is always guaranteed to exist.
> It sounds like what you're saying is that without side chains,
> everyone might switch entirely to some altcoin and bitcoin will
> basically die. And at that point, the insecurity of that coin people
> switched to can be heavily exploited by some attacker(s). Is that right?
Yes, precisely.
> It's an interesting thought experiment. However, it leads me to wonder:
> if a sidechain gets so popular that it dominates the main chain, why
> would people keep that main chain around at all?
For some reason, this is a very popular question. I suppose if you believe in "one size fits all" chain philosophy (see comment below), it makes sense to say "these sidechains are terrible" on Monday and then "these sidechains are so good they will replace the mainchain" on Tuesday.
In any event, sidechains cannot exist without their mainchain (as I see it). For example, imagine that you are on a zcash sidechain, and someone claims they deposited 1000 BTC from Bitcoin Core into this sidechain. Do you give them 1000 z-BTC, or not? Without the mainchain,
you can't tell.
If you run the Bip300 DriveNet demo software (drivechain.info/releases), you will see for yourself: the test-sidechains are absolutely inert, UNTIL they have RPC access to the mainchain. (Exactly the same way that an LN node needs a Bitcoin Core node.)
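That dependency can be sketched in a few lines (a hypothetical illustration only: the class, method names, and `mainchain_rpc` interface are invented for this sketch and are not the Bip300/DriveNet API):

```python
# Sketch: a sidechain node cannot safely credit a claimed BTC deposit
# without asking a mainchain node whether the deposit actually confirmed.
# All names here are illustrative, not real Bip300/DriveNet interfaces.

class SidechainNode:
    def __init__(self, mainchain_rpc):
        self.mainchain_rpc = mainchain_rpc  # e.g. a Bitcoin Core connection
        self.balances = {}                  # claimant -> z-BTC balance

    def credit_deposit(self, claimant, txid, amount_btc):
        if self.mainchain_rpc is None:
            # Without the mainchain, "did this 1000 BTC deposit happen?"
            # is unanswerable -- the sidechain is inert.
            raise RuntimeError("cannot verify deposit: no mainchain access")
        if not self.mainchain_rpc.deposit_confirmed(txid, amount_btc):
            return False  # claim not found on the mainchain: credit nothing
        self.balances[claimant] = self.balances.get(claimant, 0) + amount_btc
        return True
```

The same structure holds for withdrawals: every sidechain state transition that touches BTC is meaningless without a mainchain to check it against, which is the sense in which a sidechain cannot "eject" its parent without becoming a plain altcoin.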
> > someone is actually in the wrong, if they proactively censor an
> experiment of any type. If a creator is willing to stand behind
> something, then it should be tried.
> > it makes no difference if users have their funds stolen from a
> centralized Solana contract or from a bip300 centralized bit-Solana
> sidechain. I don't see why the tears shed would be any different.
> I agree with you. My point was not that we should stop anyone from
> doing this. My point was only that we shouldn't advocate for ideas we
> think aren't good. You were advocating for a "largeblock sidechain",
> and unless you have good reasons to think that is an idea likely to
> succeed and want to share them with us, then you shouldn't be
> advocating for that. But certainly if someone *does* think so and has
> their own reasons, I wouldn't want to censor or stop them. But I
> wouldn't advocate for them to do it unless their ideas were convincing
> to me, because I know enough to know the dangers of large block
> blockchains.
Yes, I strongly agree, that we should only advocate for ideas we believe in.
I do not believe in naive layer1 largeblockerism. But I do believe in sidechain largeblockism.
Something funny once happened to me when I was on a Bitcoin conference panel*. There were three people: myself, a Blockstream person, and an (ex)BitPay person. The first two of us were valiantly defending the small block position. I gave my usual speech: that node costs must remain low, so that people can run full nodes. The largeblocker mentioned that they ran many nodes (including BCH nodes etc) and didn't mind the cost, so I disclosed --in a good-natured way-- that I do not even run a BTC full node myself (out of choice). Thus, I was yammering about software I wasn't even running, I had no skin in the game! Lo and behold -- my Blockstream smallblocker ally-on-the-panel immediately admitted to everyone that he did not run a full node either. The only node-runner was the largeblocker. The audience found this very amusing (as did I).
We smallblockers justified our sinful nodeless behavior as follows (paraphrasing): we receive BTC mainly from people that we know (and have a long-term relationship with); our receipts are not time sensitive; we are not paid in BTC that often; if payments turned out to be forged we would have enormous recourse against our counterparties; etc.
We did not run full nodes, because we did not need to draw on the blockchain's powers, **for those transactions**.
Which is my point: people are different, and transactions are different. I make many transactions today, with VISA or Venmo. These are not censorship-resistant, but somehow I survive the month, without bursting into flames.
Wouldn't life be better, if we Bitcoiners could easily sweep those fiat transactions into *some* part of the BTC universe? (For example, a family of largeblock sidechains). To me the answer is clearly yes.
Unlike layer1-largeblockism, no one running Bitcoin Core ever needs to see these 'btc' transactions (the same as we don't see them today, on account of them not existing at all); they do not burden Bitcoin Core full nodes. Hence why it seems like a good idea to me.
An SPV-wallet-of-a-largeblock-sidechain, is of course, a *disgrace* compared to a full-node-of-smallblock-mainchain-Bitcoin-Core. But, it is emphatically superior to Venmo / VISA or even "custodial LN". And certainly superior to nothing.
Paul
*https://www.youtube.com/watch?v=V3cvH2eWqfU
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-02-28 7:55 ` vjudeu
@ 2022-03-04 8:42 ` ZmnSCPxj
2022-03-04 13:43 ` vjudeu
0 siblings, 1 reply; 58+ messages in thread
From: ZmnSCPxj @ 2022-03-04 8:42 UTC (permalink / raw)
To: vjudeu; +Cc: Bitcoin Protocol Discussion
Good morning vjudeu,
> > Continuous operation of the sidechain then implies a constant stream of 32-byte commitments, whereas continuous operation of a channel factory, in the absence of membership set changes, has 0 bytes per block being published.
>
> The sidechain can push zero bytes on-chain, just by placing a sidechain hash in OP_RETURN inside TapScript. Then, every sidechain node can check that "this sidechain hash is connected with this Taproot address", without pushing 32 bytes on-chain.
The Taproot address itself has to take up 32 bytes onchain, so this saves nothing.
Regards,
ZmnSCPxj
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-03-02 0:00 ` Paul Sztorc
@ 2022-03-04 12:35 ` Billy Tetrud
2022-03-04 20:06 ` Paul Sztorc
0 siblings, 1 reply; 58+ messages in thread
From: Billy Tetrud @ 2022-03-04 12:35 UTC (permalink / raw)
To: Paul Sztorc; +Cc: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 7152 bytes --]
> "these sidechains are terrible" on Monday and then "these sidechains are
so good they will replace the mainchain" on Tuesday
Your premise is that a sidechain might come to dominate bitcoin, and that
this would be better than an altcoin dominating bitcoin. Did I
misunderstand you? Not quite sure why you're balking at me simply
confirming your premise.
> sidechains cannot exist without their mainchain .. imagine .. a zcash
sidechain, and someone claims they deposited 1000 BTC
A sidechain could stop supporting deposits from or withdrawals to bitcoin
and completely break any relationship with the main chain. I agree this is
not as sure of a thing as starting with an altcoin (which of course never
has that kind of relationship with bitcoin). So I do think there are some
merits to sidechains in your scenario. However, I don't think its quite
accurate to say it completely solves the problem (of a less-secure altcoin
becoming dominant).
Your anecdote about not running a full node is amusing, and I've often
found myself in that position. I certainly agree different people are
different and so different tradeoffs can be better for different
people. However,
the question is: what tradeoffs does a largeblock sidechain do better than
both eg Visa and lightning?
>Wouldn't life be better, if we Bitcoiners could easily sweep those fiat transactions into *some* part of the BTC universe? (For example, a family of largeblock sidechains). To me the answer is clearly yes.
I guess it's not as clear to me. We agree it wouldn't significantly burden
Bitcoin-only nodes, but not being a burden is not a sufficient reason to do
something, only a reason not to prevent it. But what are the benefits to a
user of that chain? Slightly lower fees than main bitcoin? More
decentralization than Visa or Venmo? Doesn't lightning already do better on
both accounts?
On Tue, Mar 1, 2022 at 6:00 PM Paul Sztorc <truthcoin@gmail.com> wrote:
> On 3/1/2022 12:39 AM, Billy Tetrud wrote:
>
> This entire issue is avoided completely, if all the chains --decentralized and centralized-- are in the same monetary unit. Then, the monetary network effects never interfere, and the decentralized chain is always guaranteed to exist.
>
> It sounds like what you're saying is that without side chains, everyone might switch entirely to some altcoin and bitcoin will basically die. And at that point, the insecurity of that coin people switched to can be heavily exploited by some attacker(s). Is that right?
>
> Yes, precisely.
>
> It's an interesting thought experiment. However, it leads me to wonder: if a sidechain gets so popular that it dominates the main chain, why would people keep that main chain around at all?
>
> For some reason, this is a very popular question. I suppose if you believe in "one size fits all" chain philosophy (see comment below), it makes sense to say "these sidechains are terrible" on Monday and then "these sidechains are so good they will replace the mainchain" on Tuesday.
>
> In any event, sidechains cannot exist without their mainchain (as I see it). For example, imagine that you are on a zcash sidechain, and someone claims they deposited 1000 BTC from Bitcoin Core into this sidechain. Do you give them 1000 z-BTC, or not? Without the mainchain,
> you can't tell.
>
> If you run the Bip300 DriveNet demo software (drivechain.info/releases), you will see for yourself: the test-sidechains are absolutely inert, UNTIL they have RPC access to the mainchain. (Exactly the same way that an LN node needs a Bitcoin Core node.)
>
>
>
> > someone is actually in the wrong, if they proactively censor an experiment of any type. If a creator is willing to stand behind something, then it should be tried.
>
> > it makes no difference if users have their funds stolen from a centralized Solana contract or from a bip300 centralized bit-Solana sidechain. I don't see why the tears shed would be any different.
>
> I agree with you. My point was not that we should stop anyone from doing this. My point was only that we shouldn't advocate for ideas we think aren't good. You were advocating for a "largeblock sidechain", and unless you have good reasons to think that is an idea likely to succeed and want to share them with us, then you shouldn't be advocating for that. But certainly if someone *does* think so and has their own reasons, I wouldn't want to censor or stop them. But I wouldn't advocate for them to do it unless their ideas were convincing to me, because I know enough to know the dangers of large block blockchains.
>
> Yes, I strongly agree, that we should only advocate for ideas we believe in.
>
> I do not believe in naive layer1 largeblockerism. But I do believe in sidechain largeblockism.
>
> Something funny once happened to me when I was on a Bitcoin conference panel*. There were three people: myself, a Blockstream person, and an (ex)BitPay person. The first two of us, were valiantly defending the small block position. I gave my usual speech: that node costs must remain low, so that people can run full nodes. The largeblocker mentioned that they ran many nodes (including BCH nodes etc) and didn't mind the cost, so I disclosed --in a good-natured way-- that I do not even run a BTC full node myself (out of choice). Thus, I was yammering about software I wasn't even running, I had no skin in the game! Lo and behold -- my Blockstream smallblocker ally-on-the-panel, immediately admitted to everyone that he did not run a full node either. The only node-runner was the largeblocker. The audience found this very amusing (as did I).
>
> We smallblockers, justified our sinful nodeless behavior, as follows (paraphrasing): we receive BTC mainly from people that we know (and have a long-term relationship with); our receipts are not time sensitive; we are not paid in BTC that often; if payments turned out to be forged we would have enormous recourse against our counterparties; etc.
>
> We did not run full nodes, because we did not need to draw on the blockchain's powers, **for those transactions**.
>
> Which is my point: people are different, and transactions are different. I make many transactions today, with VISA or Venmo. These are not censorship-resistant, but somehow I survive the month, without bursting into flames.
>
> Wouldn't life be better, if we Bitcoiners could easily sweep those fiat transactions into *some* part of the BTC universe? (For example, a family of largeblock sidechains). To me the answer is clearly yes.
>
> Unlike layer1-largeblockism, no one running Bitcoin Core ever needs to see these 'btc' transactions (the same as we don't see them today, on account of them not existing at all); they do not burden Bitcoin Core full nodes. Hence why it seems like a good idea to me.
>
> An SPV-wallet-of-a-largeblock-sidechain, is of course, a *disgrace* compared to a full-node-of-smallblock-mainchain-Bitcoin-Core. But, it is emphatically superior to Venmo / VISA or even "custodial LN". And certainly superior to nothing.
>
> Paul
>
> * https://www.youtube.com/watch?v=V3cvH2eWqfU
>
>
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-03-04 8:42 ` ZmnSCPxj
@ 2022-03-04 13:43 ` vjudeu
0 siblings, 0 replies; 58+ messages in thread
From: vjudeu @ 2022-03-04 13:43 UTC (permalink / raw)
To: ZmnSCPxj; +Cc: Bitcoin Protocol Discussion
> The Taproot address itself has to take up 32 bytes onchain, so this saves nothing.
There is always at least one address, because you have a coinbase transaction and a solo miner or mining pool that is getting the whole reward. So, instead of using separate OP_RETURNs for each sidechain, for each federation, and for every "commitment to the blockchain", all we need is just tweaking that miner's key and placing everything inside unused TapScript. Then, we don't need separate 32 bytes for this and separate 32 bytes for that, we only need a commitment and a MAST-based path that can link such a commitment to the address of this miner.
So, instead of having:
<coinbasePubkey>
<opReturn1>
<opReturn2>
...
<opReturnN>
We could have:
<tweakedCoinbasePubkey>
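The commitment structure being proposed can be sketched with plain hashes (a simplified model: real Taproot tweaks an elliptic-curve point, Q = P + H(P || m)*G on secp256k1 per BIP341; here a bare SHA256 stands in for the curve arithmetic, and the merkle construction is illustrative):

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def merkle_root(leaves):
    """Collapse many commitments (sidechain hashes, etc.) into one 32-byte root."""
    layer = [h(leaf) for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate the odd leaf out
        layer = [h(a, b) for a, b in zip(layer[::2], layer[1::2])]
    return layer[0]

def tweak_pubkey(pubkey: bytes, root: bytes) -> bytes:
    # Stand-in for the Taproot tweak; a real implementation adds
    # H(pubkey || root)*G to the pubkey point on secp256k1.
    return h(pubkey, root)

# One coinbase key commits to every sidechain hash at once:
commitments = [b"sidechain-1 block hash", b"sidechain-2 block hash"]
root = merkle_root(commitments)
tweaked = tweak_pubkey(b"coinbase pubkey", root)

# Anyone holding (pubkey, merkle path) can recompute `tweaked` and
# confirm their hash is committed -- no extra OP_RETURN bytes onchain.
assert tweaked == tweak_pubkey(b"coinbase pubkey", merkle_root(commitments))
```

A sidechain node that knows the inner pubkey and a merkle path to its own leaf can recompute the tweaked key it sees onchain, so many commitments ride along in the one key the coinbase output needed anyway.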
On 2022-03-04 09:42:23 user ZmnSCPxj <ZmnSCPxj@protonmail.com> wrote:
> Good morning vjudeu,
> > Continuous operation of the sidechain then implies a constant stream of 32-byte commitments, whereas continuous operation of a channel factory, in the absence of membership set changes, has 0 bytes per block being published.
>
> The sidechain can push zero bytes on-chain, just by placing a sidechain hash in OP_RETURN inside TapScript. Then, every sidechain node can check that "this sidechain hash is connected with this Taproot address", without pushing 32 bytes on-chain.
> The Taproot address itself has to take up 32 bytes onchain, so this saves nothing.
> Regards,
> ZmnSCPxj
* Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
2022-03-04 12:35 ` Billy Tetrud
@ 2022-03-04 20:06 ` Paul Sztorc
0 siblings, 0 replies; 58+ messages in thread
From: Paul Sztorc @ 2022-03-04 20:06 UTC (permalink / raw)
To: Billy Tetrud; +Cc: Bitcoin Protocol Discussion
On 3/4/2022 7:35 AM, Billy Tetrud wrote:
>> sidechains cannot exist without their mainchain ...
>
> A sidechain could stop supporting deposits from or withdrawals to
> bitcoin and completely break any relationship with the main chain.
> I agree this is not as sure of a thing as starting with an altcoin
> (which of course never has that kind of relationship with bitcoin).
> So I do think there are some merits to sidechains in your scenario.
> However, I don't think its quite accurate to say it completely
> solves the problem (of a less-secure altcoin becoming dominant).
It is hard to see how this "sidechain cuts off the mainchain" scenario
could plausibly be in enough people's interest:
* Miners would lose the block subsidy (ie, the 6.25 BTC, or whatever of
it still remains), and txn fees from the mainchain and all other
merged mined chains.
* Developers would lose the ability to create a dissenting new piece of
software (and would instead be forced into a permanent USSR-style "one
party system" intellectual monoculture).
* Users would lose --permanently-- the ability to take their coins to
new blockchains, removing almost all of their leverage.
Furthermore, because sidechains cannot exist without their parent (but
not vice-versa), we can expect a large permanent interest in keeping
mainchain node costs low. Aka: very small mainchain blocks forever. So,
the shut-it-down mainchain-haters would have to meet the question "why
not just leave things the way they are?". And the cheaper the
mainchain-nodes are, the harder that question is to answer.
However, if a sidechain really were so overwhelmingly popular as to
clear all of these hurdles, then I would first want to understand why it
is so popular. Maybe it is a good thing and we should cheer it on.
> Your anecdote about not running a full node is amusing, and I've often
> found myself in that position. I certainly agree different people are
> different and so different trade offs can be better for different
> people. However, the question is: what tradeoffs does a largeblock
> sidechain do better than both eg Visa and lightning?
Yes, that's true. There are very many tradeoffs in general:
1. Onboarding
2. Route Capacity / Payment Limits
3. Failed Payments
4. Speed of Payment
5. Receive while offline / need for interaction/monitoring/watchtowers
6. Micropayments
7. Types of fees charged, and for what
8. Contribution to layer1 security budget
9. Auditability (re: large organizations) / general complexity
LN is certainly better for 4 and 6. But everything else is probably up
for grabs. And this is not intended to be an exhaustive list. I just
made it up now.
(And, if the layer2 is harmless, then its existence can be justified via
one single net benefit, for some users, somewhere on the tradeoff-list.)
Paul
end of thread, other threads:[~2022-03-04 20:06 UTC | newest]
Thread overview: 58+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-01-26 17:20 [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT Russell O'Connor
2022-01-26 22:16 ` Jeremy
2022-01-27 4:20 ` James Lu
2022-01-27 19:16 ` Russell O'Connor
2022-01-28 0:18 ` James O'Beirne
2022-01-28 13:14 ` Michael Folkson
2022-01-28 14:17 ` Anthony Towns
2022-01-28 16:38 ` Jeremy
2022-01-28 14:13 ` Russell O'Connor
2022-01-28 15:14 ` James O'Beirne
2022-01-29 15:43 ` Russell O'Connor
2022-01-29 17:02 ` Jeremy Rubin
[not found] ` <CAD5xwhjHv2EGYb33p2MRS=VSz=ciGwAsiafX1yRHjxQEXfykSA@mail.gmail.com>
2022-01-29 17:14 ` Russell O'Connor
2022-01-31 2:18 ` Anthony Towns
2022-01-28 1:34 ` Anthony Towns
2022-01-28 13:56 ` Russell O'Connor
2022-02-01 1:16 ` Anthony Towns
2022-02-08 2:16 ` Russell O'Connor
2022-02-17 14:27 ` Anthony Towns
2022-02-17 14:50 ` Russell O'Connor
2022-02-08 3:40 ` Rusty Russell
2022-02-08 4:34 ` Jeremy Rubin
2022-02-11 0:55 ` [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was " David A. Harding
2022-02-11 3:42 ` Jeremy Rubin
2022-02-11 17:42 ` James O'Beirne
2022-02-11 18:12 ` digital vagabond
2022-02-12 10:54 ` darosior
2022-02-12 15:59 ` Billy Tetrud
2022-02-17 15:15 ` Anthony Towns
2022-02-18 7:34 ` ZmnSCPxj
2022-02-23 11:28 ` ZmnSCPxj
2022-02-23 18:14 ` Paul Sztorc
2022-02-24 2:20 ` ZmnSCPxj
2022-02-24 6:53 ` Anthony Towns
2022-02-24 12:03 ` ZmnSCPxj
2022-02-26 5:38 ` Billy Tetrud
2022-02-26 6:43 ` ZmnSCPxj
2022-02-27 0:58 ` Paul Sztorc
2022-02-27 2:00 ` ZmnSCPxj
2022-02-27 7:25 ` ZmnSCPxj
2022-02-27 16:59 ` Billy Tetrud
2022-02-27 23:50 ` Paul Sztorc
2022-02-28 0:20 ` Paul Sztorc
2022-02-28 6:49 ` ZmnSCPxj
2022-02-28 7:55 ` vjudeu
2022-03-04 8:42 ` ZmnSCPxj
2022-03-04 13:43 ` vjudeu
2022-02-28 22:54 ` Paul Sztorc
2022-03-01 5:39 ` Billy Tetrud
2022-03-02 0:00 ` Paul Sztorc
2022-03-04 12:35 ` Billy Tetrud
2022-03-04 20:06 ` Paul Sztorc
2022-02-26 6:00 ` Anthony Towns
2022-02-15 8:45 ` [bitcoin-dev] " Rusty Russell
2022-02-15 18:57 ` Jeremy Rubin
2022-02-15 19:12 ` Russell O'Connor
2022-02-16 2:26 ` Rusty Russell
2022-02-16 4:10 ` Russell O'Connor