* [bitcoindev] Post Quantum Signatures and Scaling Bitcoin @ 2025-04-04 16:29 Ethan Heilman 2025-04-04 17:17 ` Dustin Ray ` (3 more replies) 0 siblings, 4 replies; 8+ messages in thread From: Ethan Heilman @ 2025-04-04 16:29 UTC (permalink / raw) To: Bitcoin Development Mailing List

I strongly believe Bitcoin will need to move to PQ signatures in the near future. The rest of this email is premised on this belief.

PQ (Post Quantum) signatures present a problem for Bitcoin:

- First, they are large. Of the three proposed in BIP-360 [0], the smallest is 1.5 kB for the public key + signature [1]. Without a discount this represents a massive reduction in Bitcoin's transaction volume, due to the increase in the size of Bitcoin payments using such signatures.

- Second, even if we discount PQ signatures and public keys so that the maximum number of transactions that can fit in a block is unchanged, we still have the problem that these blocks and transactions will be an order of magnitude bigger. If it is the case that we can handle these extra bytes without degrading performance or decentralization, then consider the headroom we are giving up that could be used for scalability.

Beyond this, there is also the risk that techniques could be developed to encode JPEGs and other data in these discounted PQ signatures or public keys. BIP-360 takes steps to make abuse of this discount more difficult by requiring that a PQ signature and public key can only be written to the blockchain if they verify. We do not need PQ signatures to be completely “JPEG resistant”; we just need them to not enable significantly cheaper storage than payments. The degree to which the proposed PQ signature algorithms resist being repurposed as a storage mechanism is an open question and worth investigating.
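To make the size concern concrete, here is a rough sanity check. The ~111 vbyte figure for a typical 1-input-2-output taproot transaction, and the exact 1,536-byte figure for the smallest BIP-360 public key + signature pair, are illustrative assumptions (the email only says ~1.5 kB):

```python
# Rough, illustrative math on the volume hit from undiscounted PQ witnesses.
# Assumed figures (not from the email): ~111 vbytes for a typical
# 1-input-2-output taproot transaction, 1,536 bytes for the smallest
# BIP-360 public key + signature pair (~1.5 kB per [1]).
TAPROOT_TX_VBYTES = 111
PQ_WITNESS_BYTES = 1536

# With no witness discount the PQ material is charged at full weight,
# so the transaction grows by roughly the whole witness size
# (ignoring the ~64-byte schnorr signature it replaces).
pq_tx_vbytes = TAPROOT_TX_VBYTES + PQ_WITNESS_BYTES
shrink = pq_tx_vbytes / TAPROOT_TX_VBYTES
print(f"PQ tx ~{pq_tx_vbytes} vbytes -> ~{shrink:.0f}x fewer txns per block")
```

Under these assumptions a PQ transaction is roughly 15x larger, consistent with the "order of magnitude" claim above.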
If it turned out PQ signatures could be used to encode data very cheaply, then Bitcoin faces a dilemma: if you discount PQ signatures, you make the JPEG problem worse and may price out the payment use case; if you don't discount them, you price out most people from sending payments in Bitcoin, since non-PQ witness data can still be used for storage.

I want to draw the community's attention to a solution that could not only address these problems but also increase Bitcoin’s scalability (and privacy): Non-interactive Transaction Compression (NTC) for transactions supporting PQ signatures. This is sometimes called Non-Interactive Witness Aggregation (NIWA) [2].

This would require a new transaction type supporting PQ signatures. The miner of a block would then pull out the signatures and hash pointers from transactions to compress transaction data, and non-interactively aggregate all the PQ signatures in all the transactions in a block, replacing them with one big STARK (STARKs are a form of SNARK which is PQ). This would make PQ signatures significantly smaller and cheaper than ECDSA and Schnorr signatures.

Consider the following back-of-the-envelope math:

2 bytes per input = 2 bytes per TXID, 0 bytes per signature
37 bytes per output = 32 bytes pubkey hash + 5 bytes value (max 2.8m BTC per output)

A 1-input-2-output transaction would be: 2 + 2*37 = 76 bytes
(4,000,000/76)/(60*10) = ~87 txns/sec

You could shave some bytes off the value, or add some bytes to the TXID. [3] provides a more detailed estimate, proposing 113.5 weight units (WU) for a 1-input-2-output transaction with no address reuse; however, it does not consider TXID compression. If desired, an account-based model could push this even further, to 12 bytes per transaction per block [4]. This would enable approximately 4,000,000/(12*60*10) = ~555 txns/sec.
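The arithmetic above can be checked with a short script; the constants are the email's own figures (4,000,000 weight units per block, 600-second block interval):

```python
# Back-of-envelope throughput for compressed PQ transactions,
# mirroring the email's arithmetic.
BLOCK_WEIGHT = 4_000_000  # max block weight units
BLOCK_SECS = 60 * 10      # target block interval in seconds

# UTXO model: 2 bytes per input (TXID pointer, no signature),
# 37 bytes per output (32-byte pubkey hash + 5-byte value).
tx_bytes = 1 * 2 + 2 * 37  # 1-input-2-output transaction = 76 bytes
utxo_tps = (BLOCK_WEIGHT / tx_bytes) / BLOCK_SECS

# Account-based model: ~12 bytes per transaction [4].
account_tps = BLOCK_WEIGHT / (12 * BLOCK_SECS)

print(f"{tx_bytes} bytes/tx -> ~{int(utxo_tps)} txns/sec")  # ~87
print(f"12 bytes/tx -> ~{int(account_tps)} txns/sec")       # ~555
```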
A secondary benefit of having on-chain PQ payments be only ~76 bytes in size is that it fundamentally changes the pricing relationship between payments and on-chain JPEGs/complex contracts. The problem with on-chain JPEGs is not that they are possible, but that they are price competitive with payments. At ~76 bytes per payment, or better yet ~76 bytes per LN channel open/close, JPEGs no longer present the same fee competition, as payments become much cheaper.

Such a system would present scaling issues for the mempool because, prior to aggregation and compression, these transactions would be 2 kB to 100 kB in size and there would be a lot more of them. It is likely parties producing large numbers of transactions would want to pre-aggregate and compress them into one big many-input, many-output transaction. Aggregating prior to the miner may have privacy benefits, but also scalability benefits, as it would enable cut-throughs and very cheap consolidation transactions. The ~87 txns/sec figure does not include these additional scalability benefits.

Consider an exchange that receives and sends a large number of transactions. For instance, between block confirmations customers send the exchange 10 1-input-2-output transactions in deposits and the exchange sends out 10 1-input-2-output transactions in withdrawals. The exchange could consolidate all of the outputs paying the exchange, including change outputs, into one output, and do the same for inputs. This would reduce not just size, but also validation costs.

(10 * 2 + 20 * 2 * 37) + (10 * 2 + 20 * 2 * 37) = 3000 bytes
becomes
(10 * 2 + 11 * 2 * 37) + (2 + 11 * 2 * 37) = 1650 bytes

If constructing these proofs turned out to be as expensive as performing POW, it would make block generation not progress-free. Essentially you'd have a two-step POW: proof generation and then the actual POW. Such a scenario would be very bad, as it would cause the biggest miner to always be the one that generates blocks.
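The consolidation arithmetic above can be reproduced directly; this just restates the email's own terms (2 bytes per input, 37 bytes per output) as code:

```python
IN_BYTES = 2    # compressed TXID pointer, no signature
OUT_BYTES = 37  # 32-byte pubkey hash + 5-byte value

# Before consolidation: mirrors the email's (10*2 + 20*2*37) term, twice
# (once for the deposit side, once for the withdrawal side).
before = 2 * (10 * IN_BYTES + 20 * 2 * OUT_BYTES)

# After consolidation: the two sides collapse to (10*2 + 11*2*37)
# and (2 + 11*2*37) bytes respectively.
after = (10 * IN_BYTES + 11 * 2 * OUT_BYTES) + (1 * IN_BYTES + 11 * 2 * OUT_BYTES)

print(f"before={before} bytes, after={after} bytes, "
      f"saving {1 - after / before:.0%}")  # 3000 -> 1650, 45% saved
```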
A critical assumption I am making is that such proof generation is not particularly expensive in the scheme of POW. I am optimistic that proof generation will not be this expensive, for two reasons:

1. There are PQ signature schemes which support non-interactive aggregation, such as LaBRADOR [5]. Thus, the STARK wouldn’t need to perform the block-wide signature aggregation and would only need to perform transaction compression, cut-throughs, and consolidation.

2. We could make use of recursive STARKs [6] to allow miners to parallelize proof generation, to reduce latency or to decentralize proof generation. Users creating transactions could perform non-interactive coinjoins with other users, or settlement/batching. This would not only take proof generation pressure off of the miners and reduce the strain on the mempool, but in some circumstances it would provide privacy if used with payjoin techniques like receiver-side payment batching [7].

The approach we are proposing treats the STARK the miner produces as free from a blocksize perspective. This is important for bootstrapping because it means that fees are significantly cheaper for a transaction, even if it is the only compressed transaction in the block. This encourages adoption. Adoption helps address the chicken-and-egg problem of wallets and exchanges not investing engineering resources to support a new transaction type if no one is using it, and no one wanting to use it because it isn't well supported. By having a single format built into the block, we both accelerate the switchover and prevent the fragmented ecosystem that might arise from doing this in Bitcoin script. Fragmentation reduces the scalability benefits, because validators have to validate multiple STARKs, and reduces the privacy benefits, because there are many separate coinjoins rather than each block being one big coinjoin.

Even if our approach here turns out to be infeasible, we need a way to reduce the size of PQ signatures in Bitcoin.
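As a toy illustration of how recursive proof aggregation parallelizes, the sketch below merges per-transaction "proofs" pairwise in a log-depth tree. The hash-based stand-in for a STARK prover is purely a placeholder; real recursive STARK composition is far more involved:

```python
import hashlib

def toy_prove(data: bytes) -> bytes:
    # Placeholder for proving one transaction, or for a recursive
    # proof that verifies two child proofs. NOT a real STARK.
    return hashlib.sha256(data).digest()

def aggregate(proofs: list[bytes]) -> bytes:
    # Merge proofs pairwise until one block-wide proof remains.
    # Each level of the tree is independent work, so different
    # provers (miners, or users pre-aggregating) can compute it in
    # parallel; latency is O(log n) merge steps rather than one
    # monolithic proving job.
    level = proofs
    while len(level) > 1:
        level = [toy_prove(b"".join(level[i:i + 2]))
                 for i in range(0, len(level), 2)]
    return level[0]

tx_proofs = [toy_prove(f"tx{i}".encode()) for i in range(8)]
block_proof = aggregate(tx_proofs)
print(block_proof.hex()[:16])
```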
The ability to move coins, including coins that represent JPEGs, is the main functionality of Bitcoin. If we make storage/JPEGs too price competitive with the ability to transfer coins, we destroy that essential functionality and decrease the utility of Bitcoin for everyone. Currently, moving coins securely requires at least one 64-byte signature, which is an unfortunate tax on this most vital of all use cases. I believe removing that tax with signature aggregation will be beneficial for all parties.

Consider the world of PQ signatures in Bitcoin without STARKs:

- The large size of PQ signatures will make it more expensive for users to use them prior to the invention of a CRQC (Cryptographically Relevant Quantum Computer). This means that most outputs will not be protected by PQ signatures. Once a CRQC arises, there will be a rush to move funds under the protection of PQ signatures, but due to the large size of PQ signatures the fees will be too expensive for most outputs. Users will instead need to move their funds to centralized custodial wallets that can use a small number of outputs. In such a world it will be much harder and more expensive to self-custody.

- Without a solution here, the large size of PQ signatures will limit Bitcoin's ability to move coins using on-chain payments. This will also favor centralized custodians and erode the decentralized nature of Bitcoin.

None of this is an argument against adopting BIP-360 or other PQ signature schemes into Bitcoin. On the contrary, having PQ signatures in Bitcoin would be a useful stepping stone to PQ transaction compression, since it would allow us to gain agreement on which PQ signature schemes to build on. Most importantly, in the event of a CRQC being developed, it will be far better to have uncompressed PQ signatures in Bitcoin than none at all.

Acknowledgements:
These ideas arose out of correspondence with Hunter Beast.
I want to thank Neha Narula, John Light, and Eli Ben-Sasson for their feedback; Jonas Nick for his feedback and his idea to use LaBRADOR for signature aggregation; and Tadge Dryja for suggesting the term “JPEG resistance” and his ideas around its feasibility. I had a number of fruitful discussions over lunch with members of the MIT DCI and on the Bitcoin PQ working group. These acknowledgements should not be taken as an agreement with or endorsement of the ideas in this email.

[0]: Hunter Beast, BIP-360: QuBit - Pay to Quantum Resistant Hash (2025) https://github.com/bitcoin/bips/pull/1670/files#
[1]: Benchmark Report: Post-Quantum Cryptography vs secp256k1 https://github.com/cryptoquick/libbitcoinpqc/blob/main/benches/REPORT.md
[2]: Ruben Somsen, SNARKs and the future of blockchains (2020) https://medium.com/@RubenSomsen/snarks-and-the-future-of-blockchains-55b82012452b
[3]: John Light, Validity Rollups on Bitcoin (2022) https://github.com/john-light/validity-rollups/blob/main/validity_rollups_on_bitcoin.md
[4]: Vitalik Buterin, An Incomplete Guide to Rollups (2021) https://vitalik.eth.limo/general/2021/01/05/rollup.html
[5]: Aardal, Aranha, Boudgoust, Kolby, Takahashi, Aggregating Falcon Signatures with LaBRADOR (2024) https://eprint.iacr.org/2024/311
[6]: Gidi Kaempfer, Recursive STARKs (2022) https://www.starknet.io/blog/recursive-starks/
[7]: Dan Gould, Interactive Payment Batching is Better (2023) https://payjoin.substack.com/p/interactive-payment-batching-is-better
[8]: John Tromp, Fee burning and Dynamic Block Size (2018) https://lists.launchpad.net/mimblewimble/msg00450.html

--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CAEM%3Dy%2BXMLuGH-MAfkYanfbU3Ynduw54jDVguKxgO2xEtnSEkZg%40mail.gmail.com.
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [bitcoindev] Post Quantum Signatures and Scaling Bitcoin 2025-04-04 16:29 [bitcoindev] Post Quantum Signatures and Scaling Bitcoin Ethan Heilman @ 2025-04-04 17:17 ` Dustin Ray 2025-04-05 20:40 ` 'Eli Ben-Sasson' via Bitcoin Development Mailing List 2025-04-04 18:43 ` Brandon Black ` (2 subsequent siblings) 3 siblings, 1 reply; 8+ messages in thread From: Dustin Ray @ 2025-04-04 17:17 UTC (permalink / raw) To: Ethan Heilman; +Cc: Bitcoin Development Mailing List

[-- Attachment #1: Type: text/plain, Size: 13139 bytes --]

This is a great post, thank you for sharing. I have one small comment that may or may not be relevant: there is an existing gap in the literature for a security proof that STARKs (or rather FRI, the underlying commitment scheme) are secure in a quantum adversary model. We conjecture that they are, because FRI relies only on hashes as the primitive in an error-correcting code, but unlike other cryptographic primitives used or proposed for critical security infrastructure, there is currently no formal security argument for FRI against a quantum threat model that I am aware of. I'm not sure how much this matters, but some may argue that stronger security arguments are warranted for any potential change to the Bitcoin signature model in a PQ landscape. That's just my two cents anyway.

On Fri, Apr 4, 2025 at 9:34 AM Ethan Heilman <eth3rs@gmail.com> wrote:
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CAC3UE4K7AG96Njra3WSnt%3D1yPVZSnT7gktnwktaumPgOD0hU8Q%40mail.gmail.com.
[-- Attachment #2: Type: text/html, Size: 15446 bytes --] ^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [bitcoindev] Post Quantum Signatures and Scaling Bitcoin 2025-04-04 17:17 ` Dustin Ray @ 2025-04-05 20:40 ` 'Eli Ben-Sasson' via Bitcoin Development Mailing List 0 siblings, 0 replies; 8+ messages in thread From: 'Eli Ben-Sasson' via Bitcoin Development Mailing List @ 2025-04-05 20:40 UTC (permalink / raw) To: Dustin Ray; +Cc: Ethan Heilman, Bitcoin Development Mailing List

[-- Attachment #1: Type: text/plain, Size: 14231 bytes --]

I believe there are quite a few works discussing FRI and STARKs in the quantum oracle model, e.g.:

https://doi.org/10.48550/arXiv.2411.05360
https://doi.org/10.1109/FOCS52979.2021.00014

On Fri, Apr 4, 2025 at 8:27 PM Dustin Ray <dustinvonsandwich@gmail.com> wrote:
Without a >> discount this represents a massive reduction in Bitcoin's transaction >> volume due to the increase in transaction size of Bitcoin payment >> using such signatures. >> - Second, even if we discount PQ signatures and public keys so that >> the maximum number of transactions that can fit in a block is >> unchanged we still have the problem that these blocks and transactions >> will be an order of magnitude bigger. If it is the case that we can >> handle these extra bytes without degrading performance or >> decentralization, then consider the head room we are giving up that >> could be used for scalability. >> >> Beyond this there is also the risk that techniques could be developed >> to encode JPEGs and other data in these discounted PQ signatures or >> public keys. BIP-360 takes steps to make an abuse of this discount >> more difficult by requiring that a PQ signature and public key can >> only be written to the blockchain if they verify. We do not need PQ >> Signatures to be completely “JPEG resistant”, they just need PQ >> signatures to not enable significantly cheaper storage than payments. >> The degree to which the proposed PQ signature algorithms resist being >> repurposed as a storage mechanism is an open question and worth >> investigating. >> >> If it turned out PQ signatures could be used to encode data very >> cheaply, then Bitcoin faces the dilemma that if you discount PQ >> signatures, you make the JPEG problem worse and may price out the >> payment use case. If you don't discount PQ, you price out most people >> from sending payments in Bitcoin since non-PQ witness data can be used >> for storage >> >> I want to draw the community's attention to a solution that could not >> only address these problems but also increase Bitcoin’s scalability >> (and privacy): >> >> Non-interactive Transaction Compression (NTC) for Transactions >> supporting PQ signatures. This is sometimes called Non-Interactive >> Witness Aggregation (NIWA) [2]. 
>> >> This would require a new transaction type supporting PQ signatures. >> The miner of a block would then pull out the signatures and hash >> pointers from transactions to compress transaction data and >> non-interactively aggregate all the PQ signatures in all the >> transactions in a block, replacing them with one big STARK (STARKS are >> a form of SNARK which is PQ). This would make PQ signatures >> significantly smaller and cheaper than ECDSA and schnorr signatures. >> >> Consider the following back of the envelope math: >> >> 2 bytes per Input = 2 bytes per TXID, 0 bytes per signature >> 37 bytes per output = 32 bytes pubkey hash + 5 bytes value (max 2.8m >> BTC per output) >> >> 1-input-2-output transaction would be: 2 + 2*37 = 76 bytes >> (4,000,000/76)/(60*10) = ~87 txns/sec >> >> You could shave some bytes off the value, or add some bytes to the >> TXID. [3] provides a more detailed estimate, proposing 113.5 weight >> units (WU) for a 1-input-2-output transaction with no address reuse. >> However it does not consider TXID compression. If desired an >> account-based model could push this even further to 12 bytes per >> transaction per block [4]. This would enable approximately >> 4,000,000/(12*60*10) = 555 txns/second. >> >> A secondary benefit of having on-chain PQ payments only be ~76 bytes >> in size is that it fundamentally changes the pricing relationship >> between payments and on-chain JPEG/complex contracts. The problem with >> on-chain JPEGs is not that they are possible, but that they are price >> competitive with payments. At ~76 bytes per payment or better yet ~76 >> bytes per LN channel open/close, JPEGs no longer present the same fee >> competition to payments as payments become much cheaper. >> >> Such a system would present scaling issues for the mempool because >> prior to aggregation and compression, these transactions would be 2kb >> to 100kb in size and there would be a lot more of them. 
It is likely >> parties producing large numbers of transactions would want to >> pre-aggregate and compress them in one big many input, many output >> transactions. Aggregating prior to the miner may have privacy benefits >> but also scalability benefits as it would enable cut-throughs and very >> cheap consolidation transactions. ~87/txns a second does not include >> these additional scalability benefits. >> >> Consider an exchange that receives and sends a large number of >> transactions. For instance between block confirmations customers send >> the exchange 10 1-input-2-output transactions in deposits and the >> exchange sends out 10 1-input-2-output transactions in withdrawals. >> The exchange could consolidate all of the outputs paying the exchange, >> including chain outputs, into one output and do the same for inputs. >> This would reduce not just size, but also validation costs. >> >> (10 * 2 + 20 * 2 * 37) + (10 * 2 + 20 * 2 * 37) = 3000 bytes >> becomes >> (10 * 2 + 11 * 2 * 37) + (2 + 11 * 2 * 37) = 1650 bytes >> >> If constructing these proofs turned out to be as expensive as >> performing POW, it would make block generation not progress free. >> Essentially you'd have a two step POW: proof generation and then the >> actual POW. Such a scenario would be very bad and cause the biggest >> miner to always be the one that generates blocks. A critical >> assumption I am making is that such proof generation is not >> particularly expensive in the scheme of POW. I am optimistic that >> proof generation will not be this expensive for two reasons >> >> There are PQ signature schemes which support non-interactive >> aggregation such as LaBRADOR [5]. Thus, the STARK wouldn’t need to >> perform the block-wide signature aggregation and would only need to >> perform transaction compression, cut throughs and consolidation. 
>>
>> We could make use of recursive STARKs [6] to allow miners to parallelize proof generation, to reduce latency or to decentralize proof generation. Users creating transactions could perform non-interactive coinjoins with other users or settlement/batching. This would not only take proof generation pressure off the miners and reduce the strain on the mempool, but in some circumstances it would provide privacy if used with payjoin techniques like receiver-side payment batching [7].
>>
>> The approach we are proposing treats the STARK the miner produces as free from a blocksize perspective. This is important for bootstrapping because it means that fees are significantly cheaper for a transaction, even if it is the only compressed transaction in the block. This encourages adoption. Adoption helps address the chicken-and-egg problem of wallets and exchanges not investing engineering resources to support a new transaction type if no one is using it, while no one wants to use it because it isn't well supported. By having a single format built into the block, we both accelerate the switchover and prevent the fragmented ecosystem that might arise from doing this in Bitcoin script. Fragmentation reduces the scalability benefits, because validators have to validate multiple STARKs, and reduces the privacy benefits, because there are many small coinjoins rather than each block being one big coinjoin.
>>
>> Even if our approach here turns out to be infeasible, we need a way to reduce the size of PQ signatures in Bitcoin. The ability to move coins, including the ability to move coins that represent JPEGs, is the main functionality of Bitcoin. If we make storage/JPEGs too price competitive with the ability to transfer coins, we destroy that essential functionality and decrease the utility of Bitcoin for everyone.
Currently, moving coins securely requires at least one 64-byte signature, which is an unfortunate tax on this most vital of all use cases. I believe removing that tax with signature aggregation will be beneficial for all parties.
>>
>> Consider the world of PQ signatures in Bitcoin without STARKs:
>> - The large size of PQ signatures will make it more expensive for users to use them prior to the invention of a CRQC (Cryptographically Relevant Quantum Computer). This means that most outputs will not be protected by PQ signatures. Once a CRQC arises there will be a rush to move funds under the protection of PQ signatures, but due to the large size of PQ signatures the fees will be too expensive for most outputs. Users will instead need to move their funds to centralized custodial wallets that can use a small number of outputs. In such a world it will be much harder and more expensive to self-custody.
>> - Without a solution here, the large size of PQ signatures will limit Bitcoin's ability to move coins using on-chain payments. This will also favor centralized custodians and erode the decentralized nature of Bitcoin.
>>
>> None of this is an argument against adopting BIP-360 or other PQ signature schemes into Bitcoin. On the contrary, having PQ signatures in Bitcoin would be a useful stepping stone to PQ transaction compression, since it would allow us to gain agreement on which PQ signature schemes to build on. Most importantly, in the event of a CRQC being developed, it will be far better to have uncompressed PQ signatures in Bitcoin than none at all.
>>
>> Acknowledgements:
>> These ideas arose out of correspondence with Hunter Beast. I want to thank Neha Narula, John Light, and Eli Ben-Sasson for their feedback; Jonas Nick for his feedback and his idea to use LaBRADOR for signature aggregation; and Tadge Dryja for suggesting the term "JPEG resistance" and for his ideas around its feasibility.
I had a number of fruitful discussions over lunch with members of the MIT DCI and on the Bitcoin PQ working group. These acknowledgements should not be taken as agreement with or endorsement of the ideas in this email.
>>
>> [0]: Hunter Beast, BIP-360: QuBit - Pay to Quantum Resistant Hash (2025) https://github.com/bitcoin/bips/pull/1670/files#
>> [1]: Benchmark Report: Post-Quantum Cryptography vs secp256k1 https://github.com/cryptoquick/libbitcoinpqc/blob/main/benches/REPORT.md
>> [2]: Ruben Somsen, SNARKs and the future of blockchains (2020) https://medium.com/@RubenSomsen/snarks-and-the-future-of-blockchains-55b82012452b
>> [3]: John Light, Validity Rollups on Bitcoin (2022) https://github.com/john-light/validity-rollups/blob/main/validity_rollups_on_bitcoin.md
>> [4]: Vitalik Buterin, An Incomplete Guide to Rollups (2021) https://vitalik.eth.limo/general/2021/01/05/rollup.html
>> [5]: Aardal, Aranha, Boudgoust, Kolby, Takahashi, Aggregating Falcon Signatures with LaBRADOR (2024) https://eprint.iacr.org/2024/311
>> [6]: Gidi Kaempfer, Recursive STARKs (2022) https://www.starknet.io/blog/recursive-starks/
>> [7]: Dan Gould, Interactive Payment Batching is Better (2023) https://payjoin.substack.com/p/interactive-payment-batching-is-better
>> [8]: John Tromp, Fee burning and Dynamic Block Size (2018) https://lists.launchpad.net/mimblewimble/msg00450.html
>>
>> --
>> You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
>> To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CAEM%3Dy%2BXMLuGH-MAfkYanfbU3Ynduw54jDVguKxgO2xEtnSEkZg%40mail.gmail.com
>> .
^ permalink raw reply	[flat|nested] 8+ messages in thread
* Re: [bitcoindev] Post Quantum Signatures and Scaling Bitcoin
  2025-04-04 16:29 [bitcoindev] Post Quantum Signatures and Scaling Bitcoin Ethan Heilman
  2025-04-04 17:17 ` Dustin Ray
@ 2025-04-04 18:43 ` Brandon Black
  2025-04-04 19:22   ` Ethan Heilman
  2025-04-05 17:39 ` Matt Corallo
  2025-04-14 13:47 ` Pieter Wuille
  3 siblings, 1 reply; 8+ messages in thread
From: Brandon Black @ 2025-04-04 18:43 UTC (permalink / raw)
  To: Ethan Heilman; +Cc: Bitcoin Development Mailing List

Hi Ethan,

Interesting idea for bringing PQ cryptography to bitcoin without
sacrificing throughput or IBD cost.

On 2025-04-04 (Fri) at 12:29:46 -0400, Ethan Heilman wrote:
> Such a system would present scaling issues for the mempool because
> prior to aggregation and compression, these transactions would be 2kb
> to 100kb in size and there would be a lot more of them. It is likely
> parties producing large numbers of transactions would want to
> pre-aggregate and compress them in one big many input, many output
> transactions. Aggregating prior to the miner may have privacy benefits
> but also scalability benefits as it would enable cut-throughs and very
> cheap consolidation transactions. ~87/txns a second does not include
> these additional scalability benefits.

I'm curious about your thoughts on the long-term incentive changes for node runners of such a scheme.

Currently, running a node in full-relay vs. blocks-only mode isn't a significant resource difference. Only the smallest of nodes operate in blocks-only mode, afaik. With a scheme like this, the delta would expand significantly, potentially weakening the transaction relay network.

Such a disjunction between the cost of transaction verification during relay vs. during block validation also represents a further externality imposed on node runners which is not compensated (as node runners do not gain fees for verifying and relaying transactions, and their primary benefit comes in the form of finality by verifying blocks).
All the best,

--
--Brandon

^ permalink raw reply	[flat|nested] 8+ messages in thread
* Re: [bitcoindev] Post Quantum Signatures and Scaling Bitcoin
  2025-04-04 18:43 ` Brandon Black
@ 2025-04-04 19:22   ` Ethan Heilman
  0 siblings, 0 replies; 8+ messages in thread
From: Ethan Heilman @ 2025-04-04 19:22 UTC (permalink / raw)
  To: Brandon Black; +Cc: Bitcoin Development Mailing List

> I'm curious about your thoughts on the long-term incentive changes for node runners of such a scheme.
> [...] Such a disjunction between the cost of transaction verification during relay vs. during block validation also represents a further externality imposed on node runners which is not compensated (as node runners do not gain fees for verifying and relaying transactions, and their primary benefit comes in the form of finality by verifying blocks).

That's an important question to figure out.

I don't frame the problem as the difference in costs between running a full-relay node or a blocks-only node. If we could make blocks-only nodes free to run without changing the costs for full relay, I'd be in favor of that. I frame it as: running a full-relay node shouldn't be too expensive, hopefully no more expensive than it is today.

In theory the transaction aggregation approach could help in two ways:

1. If most transactions are aggregated prior to entering the mempool, then we might be able to reduce verification costs for full relay and only slightly increase the bandwidth costs.

2. Relay nodes could do the aggregation themselves for users and collect fees for performing this service and performing relay. The fact that aggregation is one-way means that once a relay node performs it, another relay could not pull the transactions apart, allowing each step in the aggregation pipeline to collect fees. I don't have a detailed design for how this would work. Do you have any thoughts on how such a design would work?

On Fri, Apr 4, 2025 at 2:43 PM Brandon Black <freedom@reardencode.com> wrote:
>
> Hi Ethan,
>
> Interesting idea for bringing PQ cryptography to bitcoin without
> sacrificing throughput or IBD cost.
>
> [...]

^ permalink raw reply	[flat|nested] 8+ messages in thread
* Re: [bitcoindev] Post Quantum Signatures and Scaling Bitcoin
  2025-04-04 16:29 [bitcoindev] Post Quantum Signatures and Scaling Bitcoin Ethan Heilman
  2025-04-04 17:17 ` Dustin Ray
  2025-04-04 18:43 ` Brandon Black
@ 2025-04-05 17:39 ` Matt Corallo
  2025-04-14 13:47 ` Pieter Wuille
  3 siblings, 0 replies; 8+ messages in thread
From: Matt Corallo @ 2025-04-05 17:39 UTC (permalink / raw)
  To: Ethan Heilman, Bitcoin Development Mailing List

On 4/4/25 12:29 PM, Ethan Heilman wrote:
> I strongly believe Bitcoin will need to move to PQ signatures in the
> near future. The rest of this email is premised on this belief.

Whether this is true or not, none of the non-hash-based PQC signature schemes seem like reasonable candidates to include in Bitcoin's consensus today (as far as I'm aware, no sensible cryptographer anywhere is suggesting migrating to only PQC schemes, given the likelihood that they end up broken by classical methods at some point in the next decade, which makes putting them in Bitcoin's consensus for the long term an incredibly questionable idea). Worse, making STARKs a part of Bitcoin's security assumption seems even more far-fetched.

While this is all cool, I don't really see how this is a viable path any time in the next decade, sadly. If you believe that we need a PQ signature scheme in the near future, it seems to me the only option is something hash-based, and we eat the cost (optionally, see e.g. [1]).

[1] https://groups.google.com/g/bitcoindev/c/oQKezDOc4us/m/F-Pq-Jw2AgAJ

^ permalink raw reply	[flat|nested] 8+ messages in thread
* Re: [bitcoindev] Post Quantum Signatures and Scaling Bitcoin
  2025-04-04 16:29 [bitcoindev] Post Quantum Signatures and Scaling Bitcoin Ethan Heilman
  ` (2 preceding siblings ...)
  2025-04-05 17:39 ` Matt Corallo
@ 2025-04-14 13:47 ` Pieter Wuille
  2025-04-14 19:35   ` Ethan Heilman
  3 siblings, 1 reply; 8+ messages in thread
From: Pieter Wuille @ 2025-04-14 13:47 UTC (permalink / raw)
  To: Ethan Heilman; +Cc: Bitcoin Development Mailing List

Hi Ethan,

Thank you for bringing this up. I'm unconvinced about the practicality, but I'm happy to see thinking and discussion in this area.

Two points addressed below:

On Friday, April 4th, 2025 at 12:29 PM, Ethan Heilman <eth3rs@gmail.com> wrote:

> If it is the case that we can
> handle these extra bytes without degrading performance or
> decentralization, then consider the head room we are giving up that
> could be used for scalability.

I don't disagree with the overall point raised here, but I do think it's worth distinguishing between the "size" (bandwidth/storage) and "computation" (CPU/IO) aspects of scalability.

If it turns out to be the case that PQ schemes need more on-chain size, but have lower per-byte computation cost, a reasonable argument could be made that a higher discount factor for PQ data is acceptable. I don't know what the trade-off here ought to be, and this does not diminish your "JPEG resistance" argument, but I did want to point out that just counting size isn't the only constraint here.

> Such a system would present scaling issues for the mempool because
> prior to aggregation and compression, these transactions would be 2kb
> to 100kb in size and there would be a lot more of them. It is likely
> parties producing large numbers of transactions would want to
> pre-aggregate and compress them in one big many input, many output
> transactions. Aggregating prior to the miner may have privacy benefits
> but also scalability benefits as it would enable cut-throughs and very
> cheap consolidation transactions.
~87/txns a second does not include
> these additional scalability benefits.

I don't think pre-aggregation (beyond a single-transaction-wide one) is realistic, as it effectively breaks in-mempool transaction replacement, turning every pre-aggregated group of transactions that is being relayed together into an atomic package that must be taken or not as a whole.

Consider for example the case where transactions P, C1, and C2 are relayed, with C1 and C2 depending on P. One node sees P and C1, but not C2; they may pre-aggregate those prior to relay. Another node sees P and C2, but not C1; they may pre-aggregate those prior to relay. These two packages (P+C1, P+C2) cannot be combined, so we've effectively forced the network/miners to choose between one of C1 or C2, unless the individual transactions are still available somewhere.

I fear this is a very fast way to cause mining without direct-to-miner transaction submission from users to become uncompetitive, making entering the mining business permissioned, and effectively removing the point of having a decentralized consensus mechanism in the first place.

--
Pieter

^ permalink raw reply	[flat|nested] 8+ messages in thread
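[Editorial aside: the P/C1/C2 conflict above can be sketched as a toy model, treating each pre-aggregated package as an opaque atomic unit that cannot be unbundled. The representation below is invented for illustration.]

```python
# Toy model of the pre-aggregation conflict: once P is folded into a
# package, the package is opaque and atomic, so two packages that each
# contain P conflict and cannot be merged by downstream relays/miners.

package_a = frozenset({"P", "C1"})   # built by a node that saw P and C1
package_b = frozenset({"P", "C2"})   # built by a node that saw P and C2

def compatible(p1: frozenset, p2: frozenset) -> bool:
    # Opaque packages can coexist in a block only if they share no
    # transaction (a shared transaction means a shared input spend).
    return p1.isdisjoint(p2)

print(compatible(package_a, package_b))  # False: must drop C1 or C2
```

Because aggregation is one-way, neither package can be opened to recover P, C1, and C2 individually, which is exactly why the network is forced to choose.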
* Re: [bitcoindev] Post Quantum Signatures and Scaling Bitcoin
  2025-04-14 13:47 ` Pieter Wuille
@ 2025-04-14 19:35   ` Ethan Heilman
  0 siblings, 0 replies; 8+ messages in thread
From: Ethan Heilman @ 2025-04-14 19:35 UTC (permalink / raw)
  To: Pieter Wuille; +Cc: Bitcoin Development Mailing List

> I'm happy to see thinking and discussion in this area.

Getting this discussion going was exactly my intent. I'm not so much presenting a solution as asking: if we might want to do this at some point, what are the problems, and can we solve them?

> If it turns out to be the case that PQ schemes need more on-chain size, but have lower per-byte computation cost, a reasonable argument could be made that a higher discount factor for PQ data is acceptable.

I was focused on size because computation is pretty great for most PQ signature schemes. PQ signatures are far cheaper to validate per byte, and according to BIP-360, FALCON is cheaper than EdDSA per signature verification.

EdDSA cycles to verify: 130,000
FALCON-512 cycles to verify: 81,036

This is one of the reasons I am very optimistic that Bitcoin will move to post-quantum signatures. If research shows that these signature schemes are sufficiently JPEG resistant, and I think it will, then a discount is very attractive.

> I don't think pre-aggregation (beyond a single-transaction-wide one) is realistic, as it effectively breaks in-mempool transaction replacement, turning every pre-aggregated group of transactions that is being relayed together into an atomic package that must be taken or not as a whole.

In some circumstances it is possible you could aggregate (P+C1, P+C2) into (P+C1+C2), if you can prove that P is the same in both transactions and thus that the balance and authentication properties are maintained. However, I think what you have described is the shape of the problem we need to solve.

Consider transactions T1, T1', T2, T3, T4, T5, where T1 and T1' are double spends, i.e., spend the same output to different outputs.
If half the mempool aggregates TA = (T1, T2, T3) and the other half aggregates TB = (T1', T4, T5), then TA and TB are mutually exclusive and transactions are needlessly dropped on the floor. This is an existing griefing vector with coinjoins today and is an issue with Mimblewimble aggregation. I don't think we have seen it abused much, but that doesn't mean we can ignore it.

I believe this is a solvable problem, but it requires careful thought and I haven't seen a fully baked answer. What follows is my intuition on how this might be solved.

Approach one: have relay nodes share a map of which UTXOs would be spent by their mempool prior to performing an aggregation, to detect and resolve double spends.

Approach two: allow an aggregator to non-interactively aggregate a set of transactions only if they are the sender or receiver of funds in all the transactions they are aggregating.

My biggest concern here is a conflict between aggregator/relay incentives and miner incentives that either causes miners to become aggregators or reduces the profitability of miners. This conflict arises from the fact that, unless prevented by the protocol, an aggregator can aggregate high-fee transactions with low-fee transactions in such a way as to reduce miner fees and possibly make fees for themselves.

For the sake of example, assume the blocksize allows only two transactions per block:

T1 has a 100 sat/vb fee
T2 has a 100 sat/vb fee
T3 has a 50 sat/vb fee
T4 has a 50 sat/vb fee

If the miner were the aggregator, they would aggregate (T1 + T2) and mine it to get the highest fee. Instead, an aggregator who is not a miner could collect a fee from the creators of T3 and T4 and aggregate (T1 + T3) and (T2 + T4), thereby raising the average fee of T3 and T4. The miner loses out on fees. Approach two makes this less of an issue, because the creator of T1, if they are aware T2 exists, is unlikely to consent to having T1 aggregated with T3, since it lowers the total fee.
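[Editorial aside: the fee-dilution example above can be checked with a short sketch. As in the example, it assumes equal-vsize transactions and room for one two-transaction aggregate per block; the names below are invented for illustration.]

```python
# Sketch of the aggregator-vs-miner fee conflict described above.
# Assumes all transactions have equal vsize, so an aggregate's feerate
# is the average of its members' feerates.

fees = {"T1": 100, "T2": 100, "T3": 50, "T4": 50}  # sat/vb

def avg_feerate(aggregate) -> float:
    return sum(fees[t] for t in aggregate) / len(aggregate)

# A miner acting as its own aggregator pairs the two highest-fee txns:
miner_choice = ("T1", "T2")
print(avg_feerate(miner_choice))          # 100.0 sat/vb for the block

# A third-party aggregator instead dilutes high-fee txns with low-fee
# ones, raising T3/T4's effective feerate at the miner's expense:
diluted = [("T1", "T3"), ("T2", "T4")]
print([avg_feerate(a) for a in diluted])  # [75.0, 75.0]
```

The miner's best block drops from a 100 sat/vb aggregate to a 75 sat/vb one, which is the transfer of value from miner to aggregator that the email describes.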
This relay-vs-miner conflict isn't an entirely new issue in Bitcoin. Miners today could run relay nodes and keep the high-fee transactions for themselves. I assume this isn't done very much in 2025 because the block subsidy still dominates, but it is likely to be a bigger issue when fees dominate.

On Mon, Apr 14, 2025 at 9:47 AM Pieter Wuille <bitcoin-dev@wuille.net> wrote:
>
> Hi Ethan,
>
> thank you bringing this up. I'm unconvinced about the practicality, but I'm happy to see thinking and discussion in this area.
>
> [...]

^ permalink raw reply	[flat|nested] 8+ messages in thread
end of thread, other threads:[~2025-04-14 20:49 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-04-04 16:29 [bitcoindev] Post Quantum Signatures and Scaling Bitcoin Ethan Heilman
2025-04-04 17:17 ` Dustin Ray
2025-04-05 20:40   ` 'Eli Ben-Sasson' via Bitcoin Development Mailing List
2025-04-04 18:43 ` Brandon Black
2025-04-04 19:22   ` Ethan Heilman
2025-04-05 17:39 ` Matt Corallo
2025-04-14 13:47 ` Pieter Wuille
2025-04-14 19:35   ` Ethan Heilman
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox