> This is *literally* what the post you are replying to is proposing to solve.
I thought the changes mentioned in the OP (+ committing to the spent
input index) only solve the half-spend problem, but not the stable
txids one?
There can be other inputs with a scriptSig, which doesn't get committed
to in the APO hash. I guess this isn't too common, but there might be
some cases where you would want to spend some (pre-selected) non-segwit
inputs alongside your covenant, maybe for fees. With CTV you would
pre-commit to the scriptSig, which makes it non-malleable even if the
script itself is.
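To illustrate, here's a rough, simplified Python sketch of the BIP 119
template hash, showing how the inputs' scriptSigs get committed (via an
inner hash) whenever any of them is non-empty — the serialization here is
approximate and the names are my own; see BIP 119 for the authoritative
construction:

```python
import hashlib
import struct
from dataclasses import dataclass

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def ser_string(s: bytes) -> bytes:
    # Simplified var-length serialization (assumes len < 253)
    return bytes([len(s)]) + s

@dataclass
class TxIn:
    script_sig: bytes = b""
    sequence: int = 0xFFFFFFFF

def ctv_template_hash(version: int, locktime: int, vin: list,
                      serialized_outputs: list, input_index: int) -> bytes:
    """Approximate sketch of BIP 119's DefaultCheckTemplateVerifyHash."""
    r = struct.pack("<i", version) + struct.pack("<I", locktime)
    # The scriptSigs hash is committed whenever any scriptSig is non-empty,
    # which is what pins down pre-selected non-segwit inputs spent
    # alongside the covenant.
    if any(inp.script_sig for inp in vin):
        r += sha256(b"".join(ser_string(inp.script_sig) for inp in vin))
    r += struct.pack("<I", len(vin))
    r += sha256(b"".join(struct.pack("<I", inp.sequence) for inp in vin))
    r += struct.pack("<I", len(serialized_outputs))
    r += sha256(b"".join(serialized_outputs))
    r += struct.pack("<I", input_index)
    return sha256(r)

# Changing any scriptSig changes the template hash, so a scriptSig
# committed to at covenant-creation time cannot be malleated later.
h1 = ctv_template_hash(2, 0, [TxIn(script_sig=b"\x51")], [b"out0"], 0)
h2 = ctv_template_hash(2, 0, [TxIn(script_sig=b"\x52")], [b"out0"], 0)
assert h1 != h2
```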
> Hmm? You can't have channel factories without Eltoo. (Well, you can in theory but good luck.)
> Maybe you are referring to non-interactive channel creation?
I was referring to what BIP 119 calls 'Batched Channel Creation' [0],
which is a sort of channel factory construction under a broader
definition (and in fact was previously called that in the BIP [1]).
> The case for stable txids is less strong if we have APO (and therefore Eltoo).
There's merit in using these factory constructs for Poon-Dryja channels
even if Eltoo were available. I don't foresee Eltoo entirely replacing
the penalty approach; rather, the two are likely to live side by side.
(It could theoretically be possible to use APO to open Poon-Dryja
channels on top of unstable funding txids, but having stable txids
makes this much easier to integrate with existing lightning
implementations, without the invasive changes that unstable txids
would bring.)
> This has been addressed over and over and over again. If a QC is able overnight to spend a large fraction of
> the supply, your coins in your super non-QC-vulnerable-bare-CTV-covenant (that would eventually become
> vulnerable when trying to use it) are worthless.
It might be the case that a sufficient fraction of the supply does
switch over to QC-protected outputs in time, with only a small minority
that didn't actively switch over and had revealed bare pubkeys losing
their funds, which wouldn't make BTC entirely worthless. It makes sense
not to want to be in that minority, ideally without requiring further
time-sensitive active action (especially when considering long-term
deep cold storage, inheritance, etc.).
(This of course assumes a safe post-QC mechanism to later spend these
funds; IIUC there are some viable approaches for that using a two-step
spending procedure, where you prove knowledge of the pubkey/script
preimage while committing to a future tx.)
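To make the two-step idea concrete, here's a minimal hash-based
commit-reveal sketch — purely illustrative, not any concrete Bitcoin
proposal, and all names here are my own:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Step 1 (commit): before revealing the QC-vulnerable pubkey, publish a
# hash that binds it to one specific future spending transaction.
def make_commitment(pubkey: bytes, future_txid: bytes, salt: bytes) -> bytes:
    return h(salt + pubkey + future_txid)

# Step 2 (reveal): after the commitment is sufficiently buried, open it
# by revealing the pubkey, the committed tx, and the salt.
def verify_reveal(commitment: bytes, pubkey: bytes,
                  future_txid: bytes, salt: bytes) -> bool:
    return commitment == h(salt + pubkey + future_txid)
```

The point is ordering: a QC attacker only learns the pubkey at reveal
time, by which point the earlier commitment already pins the spend to
one pre-signed transaction, so they cannot substitute their own.

```python
commitment = make_commitment(b"pubkey", b"txid", b"salt")
assert verify_reveal(commitment, b"pubkey", b"txid", b"salt")
assert not verify_reveal(commitment, b"attacker-pubkey", b"txid", b"salt")
```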
> Sorry for being sarcastic, but at this point it's not fair to use quantum-computer FUD to justify the
> activation of CTV over APO, or encourage the use of legacy transactions over Taproot ones.
Sorry if it came off as FUDing. I don't know enough to hold a strong opinion
on whether the fear of QCs is justified or not. I know that many people
on this list don't think so, but I also think that this fear is
prevalent enough to warrant taking it into consideration (at least for
features that target long-term SoV use cases; less so for features targeted at L2 MoE applications like lightning spacechains paypools etc).
> you can also use the internal key optimization .. you can't have NUMS-ness then
Right, which makes this unsuitable for the vaulting use case.
> Also, it's not 33 extra vbytes vs CTV-in-segwitv0, but 33 extra * witness units* (8.25 vbytes).
Ugh yes, sorry about that! I realized after hitting send, and meant to clarify in my next reply that it should've been s/vbyte/WU/.
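For reference, the conversion follows from the BIP 141 weight formula,
under which witness bytes count 1 weight unit each:

```python
# BIP 141: 1 vbyte = 4 weight units (WU); base bytes count 4 WU each,
# witness bytes count 1 WU each.
WU_PER_VBYTE = 4

extra_witness_bytes = 33      # e.g. an extra 33-byte pubkey in the witness
extra_wu = extra_witness_bytes * 1
extra_vbytes = extra_wu / WU_PER_VBYTE
print(extra_vbytes)  # 8.25
```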
> Are APO signatures more expensive to verify? .. the cost for the network of validating signatures already exists today
Not compared to existing signature verifications, but compared to a CTV/TXHASH-like construction.
Can anyone quantify how much of a difference this makes in practice?
> i appreciate your reply and your efforts to explore the tradeoffs between the two approaches.
Thank you, I appreciate your efforts on this too :-)
shesek