Hi John,

I don’t work on BC, so I can’t speak to the specifics of the implementation; however, I’m very familiar with the requirements.

Generally speaking, it would be perfectly reasonable to validate independent txs against their previous outputs and later associate them with a confirming header without revalidating them. This holds for both segwit and non-segwit txs.

However, txs must be validated in the context of a given block. That context cannot be fully known until the associating block header is known. In Libbitcoin we refer to the required state for validation as “context”. The context for tx validation includes things like height, mtp, timestamp, and active forks.

Height and mtp are known for the next block (i.e. they are a function of the immediately preceding blocks), but they do not apply to all subsequent blocks. Timestamp and active forks are not known until the associating header arrives. So validation performed under certain context assumptions may not hold under the actual context. Finally, there must be a mechanism to control for previous output confirmedness and spentness absent the changes that would otherwise occur to a utxo accumulator.

Revalidating in the actual block context is one approach.

e

On Feb 18, 2025, at 21:42, John <csdarkcounter@gmail.com> wrote:

Hi everyone,

While analyzing the codebase, I observed what seem to be duplicate validation steps for transactions that already exist in the mempool when they appear in new blocks. Specifically, I'm curious whether the secondary validation performed during block acceptance could be safely optimized away for SegWit-verified transactions using their wtxid hashes. (I'm still working through the source code, and I'm not sure whether transactions are in fact validated twice.)

I'm particularly seeking clarification on two aspects: First, does the current implementation indeed perform full re-validation of mempool transactions during block processing? Second, if such optimization is theoretically possible, what subtle risks might emerge regarding transaction propagation timing or node synchronization that a newcomer like myself might overlook?

I'd be grateful for any insights about historical design decisions in this area, critical code sections I should study more deeply, or potential pitfalls in this line of thinking. 

Thanks in advance for sharing your expertise,
John

--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/3c2b1e7a-ffcd-41da-a533-2d9224fc016fn%40googlegroups.com.
