> That is an argument against adopting the inferior version, as that will
> contribute more momentum to doing it in a way that doesn't make sense long
> term.
That was more so an attempt at disclosure, rather than my argument. But also,
as noted further up in the thread, both approaches have a trade-off: one is
better for light clients in a p2p "one honest peer" mode, while the other is
more compact, but is less verifiable for light clients. They're "inferior" in
different ways.
My argument goes more like: moving to prev scripts means clients cannot
verify the filter in full unless a new block message is added to include the
prev outs. This is a downgrade assuming a "one honest peer" model for the p2p
interactions. A commitment removes this drawback, but ofc requires a soft
fork, and soft forks take a "long" time to deploy. So what's the cost of
using the current filter in the short term (we don't yet have a proposal for
committing the filters)? It lets the client fully verify the filter if they
want to (or if they suspect an attempted "bamboozlement"), and it would allow
us to experiment more with the technique on mainnet before making the step up
to committing the filter. Also, depending on the way the commitment is done,
the filters themselves may need to be modified.
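
To make the distinction concrete, here's a rough Go sketch (using the btcd
wire types; the GCS encoding itself is elided, and the helper names are just
for illustration) of the elements each variant would index. The first
function needs nothing but the block; the second needs some external source
of prev out scripts:

    package filtersketch

    import (
        "encoding/binary"
        "fmt"

        "github.com/btcsuite/btcd/wire"
    )

    // elementsFromBlockOnly gathers the items the current filter indexes:
    // created output scripts plus spent outpoints. Everything here is
    // derivable from the block itself, so a client can recompute (and thus
    // verify) the filter with no extra data.
    func elementsFromBlockOnly(block *wire.MsgBlock) [][]byte {
        var elems [][]byte
        for i, tx := range block.Transactions {
            for _, txOut := range tx.TxOut {
                elems = append(elems, txOut.PkScript)
            }
            if i == 0 {
                continue // the coinbase spends nothing
            }
            for _, txIn := range tx.TxIn {
                op := txIn.PreviousOutPoint
                var idx [4]byte
                binary.LittleEndian.PutUint32(idx[:], op.Index)
                elems = append(elems, append(op.Hash[:], idx[:]...))
            }
        }
        return elems
    }

    // elementsWithPrevScripts gathers the items a prev-script filter would
    // index. The scripts being spent live in earlier blocks, so the client
    // needs fetchPrevScript (a new p2p message, a local UTXO set, etc.) to
    // recompute the filter; the block alone is no longer enough.
    func elementsWithPrevScripts(block *wire.MsgBlock,
        fetchPrevScript func(wire.OutPoint) ([]byte, error)) ([][]byte, error) {

        var elems [][]byte
        for i, tx := range block.Transactions {
            for _, txOut := range tx.TxOut {
                elems = append(elems, txOut.PkScript)
            }
            if i == 0 {
                continue
            }
            for _, txIn := range tx.TxIn {
                prevScript, err := fetchPrevScript(txIn.PreviousOutPoint)
                if err != nil {
                    return nil, fmt.Errorf("missing prev out: %v", err)
                }
                elems = append(elems, prevScript)
            }
        }
        return elems, nil
    }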
> I don't agree at all, and I can't see why you say so.
Sure, it doesn't _have_ to, but from my PoV, as "adding more commitments" is
at the top of every developer's wish list for additions to Bitcoin, it would
make sense to coordinate on an "ultimate" extensible commitment once, rather
than special-case a bunch of distinct commitments. I can see arguments for
either, really.
> This is inherent in how e.g. the segwit commitment is encoded, the initial
> bytes are an identifying cookie. Different commitments would have different
> cookies.
Indeed, if the filter were to be committed, using an output on the coinbase
would be a likely candidate. However, I see two issues with this:
1. The current filter format (even moving to prev outs) cannot be committed
   in this fashion, as it indexes each of the coinbase output scripts. This
   creates a circular dependency: the commitment is modified by the filter,
   which is modified by the commitment (the filter atm indexes the
   commitment). So we'd need to add a special case to skip outputs with a
   particular witness magic (see the sketch after this list). However, we
   don't know what that witness magic looks like, as there's no proposal
   yet. As a result, the types of filters that can be served over the p2p
   network may be distinct from the types of filters that are to be
   committed, as the commitment may have an impact on the filter itself.
2. Since the coinbase transaction is the first in a block, it has the
   longest merkle proof path. As a result, it may take several hundred
   bytes (growing with future capacity increases) to present a proof to
   the client. Depending on the composition of blocks, this may outweigh
   the gains from taking advantage of the additional compression the prev
   outs allow.
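
To illustrate point (1), here's a hypothetical sketch of the special case
that would be needed: skip any coinbase output that carries the (not yet
specified) commitment, so the filter doesn't end up indexing its own
commitment. The magic bytes and script shape below are placeholders modeled
on how the segwit commitment is encoded, not an actual proposal:

    package filtersketch

    import (
        "bytes"

        "github.com/btcsuite/btcd/wire"
    )

    // Placeholder witness magic; the real value would come from whatever
    // commitment proposal eventually materializes.
    var hypotheticalFilterMagic = []byte{0xde, 0xad, 0xbe, 0xef}

    // isFilterCommitment guesses at a commitment output of the form
    // OP_RETURN <push 36> <4-byte magic> <32-byte filter hash>, mirroring
    // the shape of the segwit commitment output.
    func isFilterCommitment(pkScript []byte) bool {
        return len(pkScript) >= 38 &&
            pkScript[0] == 0x6a && // OP_RETURN
            pkScript[1] == 0x24 && // push of 36 bytes
            bytes.HasPrefix(pkScript[2:], hypotheticalFilterMagic)
    }

    // coinbaseElements shows the circular-dependency escape hatch: every
    // coinbase output script is indexed *except* the commitment itself.
    func coinbaseElements(coinbase *wire.MsgTx) [][]byte {
        var elems [][]byte
        for _, txOut := range coinbase.TxOut {
            if isFilterCommitment(txOut.PkScript) {
                continue // a committed filter can't index its own commitment
            }
            elems = append(elems, txOut.PkScript)
        }
        return elems
    }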
In regards to the second item above, what do you think of the old Tier Nolan
proposal [1] to create a "constant" sized proof for future commitments by
constraining the size of the block and placing the commitments within the
last few transactions in the block?
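
For a rough sense of the sizes involved in point (2): a merkle branch is
ceil(log2(numTx)) 32-byte hashes, and the coinbase's proof runs the full
height of the tree. A quick back-of-the-envelope (sketch only):

    package main

    import (
        "fmt"
        "math"
    )

    // coinbaseProofBytes is the size of the merkle branch alone (the
    // coinbase tx itself, which also has to be relayed, adds more on top).
    func coinbaseProofBytes(numTx int) int {
        depth := int(math.Ceil(math.Log2(float64(numTx))))
        return depth * 32
    }

    func main() {
        for _, n := range []int{512, 2048, 4096} {
            fmt.Printf("%4d txs -> %3d byte branch\n", n, coinbaseProofBytes(n))
        }
        // 512 txs -> 288 bytes, 2048 -> 352, 4096 -> 384; and it only grows
        // with future capacity increases.
    }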
> but with an added advantage of permitting experimentation ahead of the
> commitment.
Indeed! To my knowledge, lnd is the only software deployed that even has
code to experiment with the filtering proposal in general. Also, as I
pointed out above, we may require an additional modification in order to be
able to commit the filter. The nature of that modification may depend on how
the filter is to be committed. As a result, why hinder experimentation today
(since it might need to be changed anyway, and as you point out the filter
being committed can even be swapped) by delaying until we know what the
commitment will look like?
> You can still scan blocks directly when peers disagree on the filter
> content, regardless of how the filter is constructed
But the difference is that one option lets you fully construct the filter
from a block, while the other requires additional data.
> but it makes the attack ineffective and using outpoints considerably increases
> bandwidth for everyone without an attack
So should we optimize for the ability to validate in a particular model
(better security), or for lower bandwidth in this case? It may also be the
case that the overhead of receiving proofs of the commitment outweighs the
savings, depending on block composition (ofc an entire block that re-uses
the same address yields a super small filter).
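
As a rough illustration of that trade-off (the ~21 bits per GCS element
figure below is an assumption based on BIP158's default parameters, not an
exact constant, and the block shape is made up):

    package main

    import "fmt"

    // Assumed average cost of one distinct element in the Golomb-coded set.
    const bitsPerElement = 21

    func filterBytes(distinctElements int) int {
        return distinctElements * bitsPerElement / 8
    }

    func main() {
        // A block with 2000 inputs whose prev outs pay to only 300 distinct
        // scripts: indexing outpoints adds 2000 elements, indexing prev
        // scripts adds at most 300 after deduplication.
        fmt.Printf("outpoint elements:    ~%d bytes\n", filterBytes(2000))
        fmt.Printf("prev-script elements: ~%d bytes\n", filterBytes(300))
        // The prev-script filter wins on the wire here, but a client that
        // wants to verify it must then also fetch the prev out data (and a
        // commitment proof, if/when one exists), which can claw back or
        // outweigh the savings depending on block composition.
    }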
> It seems to me this point is being overplayed, especially considering the
> current state of non-existing validation in SPV software (if SPV software
> doesn't validate anything else they could be validating, why would they
> implement a considerable amount of logic for this?).
I don't think it's fair to compare those that wish to implement this proposal
(and actually do the validation) to the legacy SPV software that, to my
knowledge, is all but abandoned. The project I work on that seeks to deploy
this proposal (it already has, but mainnet support is behind a flag, as I
anticipated further modifications) has indeed implemented the "considerable"
amount of logic to check for discrepancies and ban peers trying to bamboozle
the light clients. I'm confident that the other projects seeking to implement
this (rust-bitcoin-spv, NBitcoin, bcoin, and maybe a few I'm missing) won't
find it too difficult to implement "full" validation, as they're bitcoin
developers with quite a bit of experience.
I think we've all learned from the defects of past light clients, and don't
seek to repeat history by purposefully implementing as little validation as
possible. With these new projects by new authors, I think we have an
opportunity to implement light clients "correctly" this time around.
-- Laolu