Date: Wed, 25 Sep 2024 05:04:43 -0700 (PDT)
From: Hunter Beast <hunter@surmount.systems>
To: Bitcoin Development Mailing List <bitcoindev@googlegroups.com>
Message-Id: <5d43fbd6-723d-4d3d-bc35-427c36a4a06an@googlegroups.com>
Subject: Re: [bitcoindev] Re: Proposing a P2QRH BIP towards a quantum resistant soft fork
Thanks for the response as always, Antoine, and I've made several substantial updates to the BIP in case you'd like to give it another once-over. I'm going to submit P2QRH to bips soon.

On Thursday, August 22, 2024 at 1:29:09 AM UTC-6, Antoine Riard wrote:

Hello Hunter,

> Well, it's also important to remember that for every qubit added, it doubles the power of the system. A 2,000 qubit cryptographically-relevant quantum computer (CRQC) is exponentially faster than a 1,000 qubit one. There's also the capability for cross-links for multiple chips to communicate with each other, which IBM is also researching. The IBM Quantum System Two can be upgraded to support 16,000 qubits according to their marketing. Also consider that the verification of the results from the CRQC can be done via classical computer, so a high level of error correction might not be as necessary so long as the program is run enough times. It will take much longer, of course.

On performance, once again I think it all depends on the quantum computer architecture considered and whether we're talking about physical or logical qubits. As the paper "The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime" linked in your BIP judiciously observes in its introduction, surface code (as used by IBM) is only one of the error correction techniques.
About cross-links for multiple chips: even if each chip parallelizes towards a single classical logical unit, ordering computational units is a notoriously hard issue in classical computing. I don't think there is any certainty in quantum computer development that the sets of qubits of isolated chips can be arithmetically added without a coefficient loss on the resulting sum (...there is always a bit of apprehension in having to dissociate marketing claims from academic claims duly peer-reviewed...). And while the results can indeed be evaluated via a classical computer, this doesn't mean transitively that the evaluation will be as efficient (in energy / computational cycles) as doing more error correction on the quantum computer side.

After looking into it more, I believe you are correct. Qubit count determines a lot of things, but not necessarily the "power"; there are many, many factors that go into that, which you've outlined.

> I've decided in one of my more recent updates to the BIP to default to the highest level of NIST security, NIST V, which provides 256 bits of security. You can see my rationale for that in this PR:
> https://github.com/cryptoquick/bips/pull/7/files

That assumes there is a security increase from scaling up the size of the public key. In the Bitcoin world, we don't even make assumptions about the public key size for the ECDSA signature scheme, as both compressed and uncompressed public keys have been historically valid. Similarly, the public key size does not have to be bundled with the specification of the signature verification scheme itself (e.g. see the BIP340 discussion on x-only public keys).

According to the spec, I was hoping to distinguish between post-quantum algorithms by their key size. If there's a collision, a distinguishing byte could be added for the new algorithm. Then they're identified by their PUSHDATA opcode.
That's the primary reason they're specified.

> As such, you'll see FALCON is roughly 4x larger than SQIsign signatures. Although supersingular elliptic curve quaternion isogeny-based algorithms are newer and more experimental than lattice-based cryptography, I think the benefits outweigh the risks, especially when transaction throughput is a principal concern.

There are no public key sizes in the security table, so it's hard to compare the overall on-chain space cost for each post-quantum signature algorithm considered. Nor is there an estimation of the verification cost for an average 200-byte transaction; good old Hamilton's quaternions rely on complex numbers, which can be hard to deal with on hobbyist CPUs, and that can be a concern.

I've updated the table to reflect the key size concern. For verification cost, I've found it's difficult to compare numbers provided by the different papers. Some provide cycles, some provide durations. I do want to include a benchmark in the test vectors once they're ready.

> It's crucial that the signature and public key both receive the witness discount. Can you go into more detail on how that might be accomplished?

The BIP341 taproot annex could be used for that, see https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki#cite_note-5

I've adjusted the BIP to integrate with Taproot. The primary difference is that this will use a hash of the Taproot public keys in the v3 spend script.
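As a minimal sketch of the kind of commitment described above, assuming HASH256 means Bitcoin's usual double SHA-256 and that the committed key is a 32-byte x-only Taproot key (the exact v3 spend script layout is left to the BIP; the key bytes here are purely illustrative):

```python
import hashlib

def hash256(data: bytes) -> bytes:
    """Bitcoin-style HASH256: two rounds of SHA-256, 32-byte output."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# A hypothetical 32-byte x-only Taproot public key (illustrative bytes,
# here the x-coordinate of the secp256k1 generator point).
xonly_pubkey = bytes.fromhex(
    "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798"
)

# The output would commit to the hash of the key, not the key itself,
# so the public key is only revealed at spend time.
commitment = hash256(xonly_pubkey)
assert len(commitment) == 32
```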
> Although it's too early to talk about activation of a QuBit soft fork, I've put some thought into how we can maintain the existing Bitcoin throughput with a soft fork, and I think it might be prudent to, when the time comes, introduce a 4x additional QuBit witness discount, maybe we call it the quitness, which is only available to valid P2QRH signatures. This would preclude its abuse for things like inscriptions because the signature data would need to correspond to the key, and even if this were possible, it's likely to result in only a burner address. This would increase chain state growth from roughly 100GB/yr to possibly closer to 200-300GB/yr, depending on adoption. As the state of the art of SSD technology advances, this should allow plebs to run their own node on a 4TB disk for over a decade, even including the existing chain size of ~600GB.

The annex could have typed fields for a further witness discount on post-quantum signatures and public keys. However, I think it's a bit naive to assume that SSD technology advances will stay linear and that they will be economically accessible at the same pace to the tens of thousands of plebs actually running full-nodes and constituting the skeleton of the base-relay network. One could play out a posteriori the predictions on bandwidth technological advances that were made in BIP103 to see how well they have held up over the last ~9 years.

According to the C program in BIP-101, it looks like the block size would have increased by nearly 4x over the past ~9 years. I've specified in the BIP a separate witness, which I call the quitness, that will solely receive the additional 4x discount. Schnorr signatures are still kept in the witness.

(There is another caution with evaluating technological advances, namely that some hardware components could actually be massively consumed by other cryptocurrencies for their consensus algorithms...)
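For a rough sense of what the proposed extra 4x "quitness" discount would change, here is a back-of-the-envelope sketch. Assumptions: BIP141 weight accounting (non-witness bytes weigh 4, witness bytes weigh 1), FALCON-1024's approximate published parameter sizes (~1280-byte signature, 1793-byte public key), and the quitness mechanics are only the proposal sketched above, not anything deployed:

```python
# Approximate FALCON-1024 parameter sizes (bytes).
FALCON_SIG = 1280
FALCON_PK = 1793

def vbytes(non_witness: int, witness: int, quitness: int = 0) -> float:
    """Virtual size under BIP141 weight, with a hypothetical further
    4x discount on "quitness" bytes (16x total vs. non-witness data)."""
    weight = 4 * non_witness + 1 * witness + 0.25 * quitness
    return weight / 4

# ~200 non-witness bytes plus FALCON material in the ordinary witness:
plain = vbytes(200, FALCON_SIG + FALCON_PK)
# Same material moved into the further-discounted quitness:
discounted = vbytes(200, 0, FALCON_SIG + FALCON_PK)
print(plain, discounted)  # 968.25 392.0625
```

The ratio between the two is roughly what determines whether post-quantum spends stay fee-competitive with Schnorr spends.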
> If we were to use the same approach for FALCON signatures, a 16x discount would be needed, and I think that's far too much for the community to accept. As for pubkey size and verification time, these are secondary considerations if the primary constraint is maintaining present transaction throughput. That's what makes SQIsign so promising.

Well, if there is something like the annex with typed fields, each type of post-quantum signature could get a wider discount, especially if there are verification asymmetries favoring one scheme over another, even if their security properties differ.

As you know, Bitcoin doesn't charge based on how long a script takes to run, so it would make sense to charge based only upon byte count. If runtime is a major concern, and it is desired by the community, it can be proposed as a separate BIP and potentially included in a QuBit soft fork.

> The Impact paper seems to dismiss Grover's algorithm, but I think it's important to err on the side of caution and instead use a 32-byte double SHA-2 (HASH256) for additional security in the P2QRH output.

Performance-wise, it doesn't shock me to use a double SHA-2 (HASH256), as many domain-separation tagged hashes were added in taproot. About Grover's algorithm, it's more the sample space and collision space that should be better defined to be relevant; you can always degrade the performance of Grover's algorithm by scaling up the sample space, though it's not certain that's practical for bitcoin transaction generation.

That's good. Additionally, because Grover's algorithm scales so poorly compared to Shor's, I think it's a safe security assumption that HASH256 will be more secure for use in the v3 spend script.

> I'm not sure I understand what you mean by this...
> Is your coin scarcity comment related to what I call "satoshi's shield" in the BIP?
Not at all the "satoshi's shield" as you're describing in the BIP. This is just the observation that bitcoin coins are scarce in the sense=20 that you need to burn raw energy to acquire the rewards according to the=20 issuance schedule (or miners fees). Bitcoin script can be designed to=20 request that a sufficient number of bitcoin coins, or satoshis, are burned= =20 before to unlock a coin locked under a quantum-frail scriptpubkey. That means any quantum computer attacker, even if they have an efficient=20 quantum computer, might not be able to break the redeem script itself, only= =20 the signatures composing the redeem script check sig operations. Let's give a concrete example, let's say you have the following pseudo=20 script: < <998> =20 > Interpeted the following script should request from the spending party,=20 whatever it is to provide a witness stack of length 998 bytes, all dummy=20 elements. Those dummy elements are putting the burden on the quantum computer=20 attacker to burn fees at the current sat per vbyte rate to realize a=20 quantum exploit. (There could leverage SIGHASH_NONE to escape this "fee jail"... however it= =20 sounds to expose them to be overrided by a miner). So assuming this defensive scheme in face of quantum exploit is sound, I=20 think this put the burden of a quantum attacker to have hashrate=20 capabilities at the current level of difficulty, not solely an efficient=20 CRQC. I'm not sure I understand the point you're making, but only valid public=20 key / signature pairs in the quitness will be considered valid. =20 > Yes, this makes more sense. I'm not sure anything can be done with the=20 fraud proofs, but they could at least prove that a bad actor is present.=20 Ideally both approaches are combined for maximum security and=20 accountability. No KYC is necessarily hurting mining pools as there is no single kyc=20 definition that you can implement that do not open the door for a kind of= =20 DoS exploitation. 
This is not an issue for building a practical fraud-proof system on seen transactions; the open question is more whether the average bitcoin user would pay to download fraud proofs demonstrating that a given miner is engaging in quantum exploits.

Makes sense.

> I've taken Antoine's feedback to heart and added FALCON to the specification, including a section that addresses the increased maintenance burden of adding two distinct post-quantum cryptosystems.

Thank you for the addition. On the maintenance burden, there is always the counter-argument that you can secure coins under multiple post-quantum signature schemes, especially if they're from different breeds of hardness assumptions. If one of the two schemes is secure, the coins are still locked by the other half.

You'll see I've taken this feedback to heart and specified hybrid cryptography in the BIP.

I think it could be interesting to split the BIP into multiple ones: one for the general consensus mechanism introducing P2QRH with all the quantum risk considerations, and an individual one for each signature algorithm that could be deployed under this generic P2QRH. Kinda in the same way that BIP340 / BIP341 are split.

You might be right about that. I'd still like to specify FALCON for the first one, but additional signature algorithms can get their own BIPs.

Best,
Antoine

ots hash: b57e9fe0b3de603ca66be29b7f1ba04fa5b8bc516c1277114ab42ac9f8572e12

Let me know if there are any additional changes you would like me to make. I'll be submitting the BIP upstream to the bips repo as a draft PR soon. Do you mind if I credit you in the Acknowledgements section? Thanks for all the great feedback so far.
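As an aside on the hybrid-cryptography point above, the "coins stay locked if either scheme survives" property can be sketched abstractly. The verifier callables here are hypothetical stand-ins, not bindings to real FALCON or SQIsign implementations:

```python
# Sketch of hybrid verification: a spend is valid only if *every*
# scheme's signature verifies, so breaking one hardness assumption
# (e.g. lattices) alone does not unlock the coin.
from typing import Callable, Sequence, Tuple

# A verifier takes (pubkey, message, signature) and returns validity.
Verifier = Callable[[bytes, bytes, bytes], bool]

def hybrid_verify(msg: bytes,
                  checks: Sequence[Tuple[Verifier, bytes, bytes]]) -> bool:
    """Require all schemes to pass: security reduces to the strongest
    surviving hardness assumption, not the weakest."""
    return all(verify(pk, msg, sig) for verify, pk, sig in checks)
```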
On Thursday, August 15, 2024 at 06:25:01 UTC+1, Hunter Beast wrote:

I've taken Antoine's feedback to heart and added FALCON to the specification, including a section that addresses the increased maintenance burden of adding two distinct post-quantum cryptosystems. Please review.

https://github.com/cryptoquick/bips/pull/9/files

On Tuesday, August 6, 2024 at 11:50:35 AM UTC-6, Hunter Beast wrote:

That's alright, Antoine, it's been a busy month for me too.

> So I think it's good to stay cool-minded, and I think my observation about talking of a "super-exponential rate" as used in maaku's old blog post does not hold a lot of rigor to describe the advances in the field of quantum computing. Note also how IBM is a commercial entity that can have a lot of interest in "pumping" the state of "quantum computing" to gather funding (there is a historical anecdote among bitcoin OG circles about Vitalik trying to do an ICO to build a quantum computer like 10 years ago, just to remember).

Well, it's also important to remember that for every qubit added, it doubles the power of the system. A 2,000 qubit cryptographically-relevant quantum computer (CRQC) is exponentially faster than a 1,000 qubit one. There's also the capability for cross-links for multiple chips to communicate with each other, which IBM is also researching. The IBM Quantum System Two can be upgraded to support 16,000 qubits according to their marketing. Also consider that the verification of the results from the CRQC can be done via classical computer, so a high level of error correction might not be as necessary so long as the program is run enough times. It will take much longer, of course.

> I think FALCON is what has the smallest pubkey + sig size for hash-and-sign lattice-based schemes.
> So I think it's worth reworking the BIP to see what has the smallest generation / validation time and pubkey + sig size for the main post-quantum scheme. At least for Dilithium, FALCON, SPHINCS+ and SQIsign. For a hypothetical witness discount, a v2 P2QRH could always be moved into a very template annex tag / field.

I've decided in one of my more recent updates to the BIP to default to the highest level of NIST security, NIST V, which provides 256 bits of security. You can see my rationale for that in this PR:

https://github.com/cryptoquick/bips/pull/7/files

Then, referencing this table:

https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki#security

As such, you'll see FALCON is roughly 4x larger than SQIsign signatures. Although supersingular elliptic curve quaternion isogeny-based algorithms are newer and more experimental than lattice-based cryptography, I think the benefits outweigh the risks, especially when transaction throughput is a principal concern.

It's crucial that the signature and public key both receive the witness discount. Can you go into more detail on how that might be accomplished?

Although it's too early to talk about activation of a QuBit soft fork, I've put some thought into how we can maintain the existing Bitcoin throughput with a soft fork, and I think it might be prudent to, when the time comes, introduce a 4x additional QuBit witness discount, maybe we call it the quitness, which is only available to valid P2QRH signatures. This would preclude its abuse for things like inscriptions because the signature data would need to correspond to the key, and even if this were possible, it's likely to result in only a burner address. This would increase chain state growth from roughly 100GB/yr to possibly closer to 200-300GB/yr, depending on adoption.
As the state of the art of SSD technology advances, this should allow plebs to run their own node on a 4TB disk for over a decade, even including the existing chain size of ~600GB.

If we were to use the same approach for FALCON signatures, a 16x discount would be needed, and I think that's far too much for the community to accept. As for pubkey size and verification time, these are secondary considerations if the primary constraint is maintaining present transaction throughput. That's what makes SQIsign so promising.

> See the literature on quantum attacks on bitcoin in the references of the paper you quote ("The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime") for a discussion of Grover's search algorithm.

The Impact paper seems to dismiss Grover's algorithm, but I think it's important to err on the side of caution and instead use a 32-byte double SHA-2 (HASH256) for additional security in the P2QRH output.

> Namely you can introduce an artificial "witness-stack size scale ladder" in pseudo-bitcoin script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP ...checksig...
> I have not verified that it works well on bitcoin core, though this script should put the burden on the quantum attacker to have enough bitcoin available to burn in on-chain fees in witness size to break a P2WPKH.

I'm not sure I understand what you mean by this...

Is your coin scarcity comment related to what I call "satoshi's shield" in the BIP?

> The technical issue is that if you implement KYC for a mining pool, you're increasing your DoS surface, and this could be exploited by competing miners. A more reasonable security model can be to have miner coinbase pubkeys used to commit to the "seen-in-mempool" spends and from there build "hand-wavy" fraud proofs that a miner is quantum attacking your P2WSH spends at pubkey reveal time during transaction relay.

Yes, this makes more sense.
I'm not sure anything can be done with the fraud proofs, but they could at least prove that a bad actor is present. Ideally both approaches are combined for maximum security and accountability.

Thanks for your time!

On Friday, July 12, 2024 at 7:44:27 PM UTC-6, Antoine Riard wrote:

Hi Hunter Beast,

Apologies for the delay in answering.

> I was thinking of focusing on the IBM Quantum System Two, mention how it can be scaled, and that although it might be quite limited, if running Shor's variant for a sufficient amount of time, above a certain minimum threshold of qubits, it might be capable of decrypting the key to an address within one year. I base this on the estimate provided in a study by the Sussex Centre for Quantum Technologies, et al. [1]. They provide two figures: 317M qubits to decrypt in one hour, 13M qubits to decrypt in one day. It would seem it scales roughly linearly, and so extrapolating further, 36,000 qubits would be needed to decrypt an address within one year. However, the IBM Heron QPU turned out to have a gate time 100x less than was estimated in 2022, and so it might be possible to make do with even fewer qubits still within that timeframe. With only 360 qubits, barring algorithmic overhead such as for circuit memory, it might be possible to decrypt a single address within a year. That might sound like a lot, but being able to accomplish that at all would be significant, almost like a Chicago Pile moment, proving something in practice that was previously only thought theoretically possible for the past 3 decades. And it's only downhill from there...
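As a quick sanity check of the linear extrapolation quoted above, using only the two Sussex figures (everything else is arithmetic):

```python
# The two published figures imply a roughly constant qubit-hours budget:
# 317M qubits x 1 hour and 13M qubits x 24 hours agree to within ~2%,
# so stretching the same budget over a year gives the qubit count.
budget_one_hour = 317e6 * 1    # ~3.17e8 qubit-hours
budget_one_day = 13e6 * 24     # ~3.12e8 qubit-hours, consistent

hours_per_year = 24 * 365
qubits_for_one_year = budget_one_day / hours_per_year
print(round(qubits_for_one_year))  # ~35,600, matching the ~36,000 figure
```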
Briefly surveying the paper "The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime", I think it's a reasonable framework to evaluate the practical efficiency of quantum attacks on bitcoin; it's self-consistent and there is a critical approach referencing the usual literature on quantum attacks on bitcoin.

Just note the caveat one can find in the usual quantum complexity literature, "particularly in regard to end-to-end physical resource estimation. There are many other error correction techniques available, and the best choice will likely depend on the underlying architecture's characteristics, such as the available physical qubit–qubit connectivity" (verbatim). Namely, evaluating quantum attacks is very dependent on the concrete physical architecture underpinning them.

All that said, I agree with you that if you see a quantum computer in the range of 1000 physical qubits being able to break the DLP for ECC-based encryption like secp256k1, even if it takes a year, it will be a Chicago Pile moment, or whatever the comparable experiments on chains of nuclear reactions were in the 30s / 40s.

> I think it's time to revisit these discussions given IBM's progress. They've published two videos in particular that are worth watching; their keynote from December of last year [2], and their roadmap update from just last month [3].

I have looked at the roadmap as it's available on the IBM blog post: https://www.ibm.com/quantum/blog/quantum-roadmap-2033#mark-roadmap-out-to-2033

They give only a target of 2000 logical qubits to be reached in 2033... which is surprisingly not that strong... And one expects they might well hit solid-state issues in laying out the Heron processor architecture in hardware. As a point of comparison, it took about 2 decades to advance the state of the art of lithography in traditional chip manufacturing.
So I think it's good to stay cool-minded, and I think my observation about talking of a "super-exponential rate" as used in maaku's old blog post does not hold a lot of rigor to describe the advances in the field of quantum computing. Note also how IBM is a commercial entity that can have a lot of interest in "pumping" the state of "quantum computing" to gather funding (there is a historical anecdote among bitcoin OG circles about Vitalik trying to do an ICO to build a quantum computer like 10 years ago, just to remember).

> I'm supportive of this consideration. FALCON might be a good substitute, and maybe it can be upgraded to HAWK for even better performance depending on how much time there is. According to the BIP, FALCON signatures are ~10x larger than Schnorr signatures, so this will of course make the transaction more expensive, but we also must remember, these signatures will be going into the witness, which already receives a 4x discount. Perhaps the discount could be increased further someday to fit more transactions into blocks, but this will also likely result in more inscriptions filling unused space, which permanently increases the burden of running an archive node. Due to the controversy such a change could bring, I would rather any increases in the witness discount be excluded from future activation discussions, so as to be considered separately, even if it pertains to an increase in P2QRH transaction size.

> Do you think it's worth reworking the BIP to use FALCON signatures? I've only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the readiness levels between those two are presently worlds apart.

I think FALCON is what has the smallest pubkey + sig size for hash-and-sign lattice-based schemes.
So I think it's worth reworking the BIP to see what has the smallest generation / validation time and pubkey + sig size for the main post-quantum scheme. At least for Dilithium, FALCON, SPHINCS+ and SQIsign. For a hypothetical witness discount, a v2 P2QRH could always be moved into a very template annex tag / field.

> Also, do you think it's of any concern to use HASH160 instead of HASH256 in the output script? I think it's fine for a cryptographic commitment since it's simply a hash of a hash (RIPEMD-160 of SHA-256).

See the literature on quantum attacks on bitcoin in the references of the paper you quote ("The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime") for a discussion of Grover's search algorithm.

> I'm not sure I fully understand this, but even more practically, as mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally with a value of fewer than 50 coins per address, and when funds ever need to be spent, the transaction is signed and submitted out of band to a trusted mining pool, ideally one that does KYC, so it's known which individual miners get to see the public key before it's mined. It's not perfect, since this relies on exogenous security assumptions, which is why P2QRH is proposed.

Again, the paper you're referencing ("The impact of hardware specifications on reaching quantum advantage...") analyzes the performance of quantum advantage along 2 dimensions, namely space and time. My observation is that in Bitcoin we have an additional dimension, "coin scarcity", that can be leveraged to build a defense of address spends in the face of quantum attacks.

Namely, you can introduce an artificial "witness-stack size scale ladder" in pseudo-bitcoin script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP ...checksig...
I have not verified that it works well on bitcoin core, though this script should put the burden on the quantum attacker to have enough bitcoin available to burn in on-chain fees in witness size to break a P2WPKH.

> ideally with a value of fewer than 50 coins per address, and when funds ever need to be spent, the transaction is signed and submitted out of band to a trusted mining pool, ideally one that does KYC, so it's known which individual miners get to see the public key before it's mined. It's not perfect, since this relies on exogenous security assumptions, which is why P2QRH is proposed.

The technical issue is that if you implement KYC for a mining pool, you're increasing your DoS surface, and this could be exploited by competing miners. A more reasonable security model can be to have miner coinbase pubkeys used to commit to the "seen-in-mempool" spends and from there build "hand-wavy" fraud proofs that a miner is quantum attacking your P2WSH spends at pubkey reveal time during transaction relay.

Best,
Antoine

ots hash: 1ad818955bbf0c5468847c00c2974ddb5cf609d630523622bfdb27f1f0dc0b30

On Monday, June 17, 2024 at 23:25:25 UTC+1, hunter wrote:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 2024-06-16 19:31, Antoine Riard wrote:

> Hi Hunter Beast, I think any post-quantum signature algorithm upgrade proposal would grandly benefit from having Shor's-based practical attacks far more defined in the Bitcoin context. As soon as you start to talk about quantum computers, there is no such thing as a "quantum computer", but rather a wide array of architectures based on a range of technologies to encode qubits on nanoscale physical properties.

Good point.
I can write a section in the BIP's Motivation or Security section about how an attack might take place practically, and the potential urgency of such an attack.

I was thinking of focusing on the IBM Quantum System Two, mentioning how it can be scaled, and that although it might be quite limited, if running Shor's variant for a sufficient amount of time, above a certain minimum threshold of qubits, it might be capable of decrypting the key to an address within one year. I base this on the estimate provided in a study by the Sussex Centre for Quantum Technologies, et al. [1]. They provide two figures: 317M qubits to decrypt in one hour, 13M qubits to decrypt in one day. It would seem it scales roughly linearly, and so, extrapolating further, 36,000 qubits would be needed to decrypt an address within one year. However, the IBM Heron QPU turned out to have a gate time 100x less than was estimated in 2022, and so it might be possible to make do with even fewer qubits within that timeframe. With only 360 qubits, barring algorithmic overhead such as for circuit memory, it might be possible to decrypt a single address within a year. That might sound like a lot, but being able to accomplish that at all would be significant, almost like a Chicago Pile moment, proving in practice something that was previously only thought theoretically possible for the past three decades.
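For concreteness, the linear extrapolation described above can be sketched in a few lines. This is my own back-of-the-envelope arithmetic from the two published figures, not numbers taken from the paper itself:

```python
# Extrapolate the Sussex estimates, assuming qubits * runtime is roughly
# constant for breaking one key: 317M qubits for 1 hour, 13M for 1 day.
HOURS_PER_YEAR = 365.25 * 24

estimates = [(317e6, 1), (13e6, 24)]           # (qubits, hours)
avg_qubit_hours = sum(q * h for q, h in estimates) / len(estimates)

qubits_one_year = avg_qubit_hours / HOURS_PER_YEAR
print(round(qubits_one_year))        # on the order of 36,000 qubits

# With ~100x faster gates (the Heron surprise), the same budget shrinks:
print(round(qubits_one_year / 100))  # on the order of 360 qubits
```

The two products (317M qubit-hours vs. 13M × 24 ≈ 312M qubit-hours) agree closely, which is what justifies the roughly-linear assumption in the first place.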
And it's only downhill from there...

> It is not certain that any Shor's algorithm variant works smoothly independently of the quantum computer architecture considered (e.g. gate frequency, gate infidelity, cooling energy consumption), and I think it's an interesting open game-theory problem whether you can concentrate a sufficient amount of energy before any coin owner moves their coins in response (e.g. seeing a quantum break in the mempool and reacting with a counter-spend).

It should be noted that P2PK outputs still hold millions of bitcoin, and those expose the entire public key for everyone to see for all time. Thus, early QC attacks won't need to consider the complexities of the mempool.

> In my opinion, one of the last times the subject was addressed on the mailing list, the description of the state of the quantum computing field was not realistic and got into risk-characterization hyperbole, talking about a "super-exponential rate" (when indeed there is no empirical evidence that distinct theoretical advances in quantum capabilities can be combined with each other) [1].

I think it's time to revisit these discussions given IBM's progress. They've published two videos in particular that are worth watching: their keynote from December of last year [2], and their roadmap update from just last month [3].

> On your proposal, there is an immediate observation which comes to mind, namely why not use one of the algorithms (Dilithium, SPHINCS+, FALCON) which have been through the three rounds of NIST cryptanalysis. Apart from the signature size, which sounds to be smaller, in a network of full nodes any PQ signature algorithm should have reasonable verification performance.

I'm supportive of this consideration. FALCON might be a good substitute, and maybe it can be upgraded to HAWK for even better performance, depending on how much time there is.
According to the BIP, FALCON signatures are ~10x larger than Schnorr signatures, so this will of course make the transaction more expensive, but we must also remember that these signatures will be going into the witness, which already receives a 4x discount. Perhaps the discount could be increased further someday to fit more transactions into blocks, but this would also likely result in more inscriptions filling the unused space, which permanently increases the burden of running an archive node. Due to the controversy such a change could bring, I would rather any increase in the witness discount be excluded from future activation discussions, so as to be considered separately, even if it pertains to an increase in P2QRH transaction size.

Do you think it's worth reworking the BIP to use FALCON signatures? I've only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the readiness levels between those two are presently worlds apart.

Also, do you think it's of any concern to use HASH160 instead of HASH256 in the output script? I think it's fine for a cryptographic commitment since it's simply a hash of a hash (RIPEMD-160 of SHA-256).

> Lastly, there is a practical defensive technique that can be implemented today by coin owners to protect against hypothetical quantum adversaries, namely setting spending scripts to request an artificially inflated witness stack, as the cost has to be borne by the spender. I think one can easily do that with OP_DUP and OP_GREATERTHAN and a bit of stack shuffling. While the efficiency of this technique is limited by the max consensus size of the script stack (`MAX_STACK_SIZE`) and the max consensus size of a stack element (`MAX_SCRIPT_ELEMENT_SIZE`), this adds an additional "scarce coins" prerequisite on the quantum adversary to succeed.
> Shor's algorithm is only defined over the classic resources of computational complexity, time and space.

I'm not sure I fully understand this, but even more practically, as mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally with a value of fewer than 50 coins per address, and when funds ever need to be spent, the transaction is signed and submitted out of band to a trusted mining pool, ideally one that does KYC, so it's known which individual miners get to see the public key before it's mined. It's not perfect, since this relies on exogenous security assumptions, which is why P2QRH is proposed.

> Best,
> Antoine
> [1] https://freicoin.substack.com/p/why-im-against-taproot

I'm grateful you took the time to review the BIP and offer your detailed insights.

[1] "The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime," 2022 - https://pubs.aip.org/avs/aqs/article/4/1/013801/2835275/The-impact-of-hardware-specifications-on-reaching
[2] https://www.youtube.com/watch?v=De2IlWji8Ck
[3] https://www.youtube.com/watch?v=d5aIx79OTps

> Le vendredi 14 juin 2024 à 15:30:54 UTC+1, Hunter Beast a écrit :
>
> > Good points. I like your suggestion of SPHINCS+, just due to how mature it is in comparison to SQIsign. It's already in its third round and has several standards-compliant implementations, and it has an actual specification rather than just a research paper. One thing to consider is that NIST-I round 3 signatures are 982 bytes in size, according to what I was able to find in the documents hosted on the SPHINCS website.
> > https://web.archive.org/web/20230711000109if_/http://sphincs.org/data/sphincs+-round3-submission-nist.zip
> >
> > One way to handle this is to introduce this as a separate address type from SQIsign.
That won't require OP_CAT, and I do want to keep this soft fork limited in scope. If SQIsign does become significantly broken, in that hopefully far-future scenario, I might be supportive of an increase in the witness discount.
> >
> > Also, I've made some additional changes based on your feedback on X. You can review them here if you so wish:
> > https://github.com/cryptoquick/bips/pull/5/files?short_path=917a32a#diff-917a32a71b69bf62d7c85dfb13d520a0340a30a2889b015b82d36411ed45e754
> >
> > On Friday, June 14, 2024 at 8:15:29 AM UTC-6 Pierre-Luc Dallaire-Demers wrote:
> > > SQIsign is blockchain friendly but also very new; I would recommend adding a hash-based backup key in case an attack on SQIsign is found in the future (recall that SIDH broke over the span of a weekend: https://eprint.iacr.org/2022/975.pdf).
> > > Backup keys can be added in the form of a Merkle tree where one branch would contain the SQIsign public key and the other the public key of the recovery hash-based scheme. For most transactions it would only add one bit to specify the SQIsign branch.
> > > The hash-based method could be SPHINCS+, which is standardized by NIST but requires adding extra code, or Lamport, which is not standardized but can be verified on-chain with OP_CAT.
> > >
> > > On Sunday, June 9, 2024 at 12:07:16 p.m. UTC-4 Hunter Beast wrote:
> > > > The motivation for this BIP is to provide a concrete proposal for adding quantum resistance to Bitcoin. We will need to pick a signature algorithm, implement it, and have it ready in the event of a quantum emergency. There will be time to adopt it. Importantly, this first step is a more substantive answer to those with concerns beyond, "quantum computers may pose a threat, but we likely don't have to worry about that for a long time".
Bitcoin development and activation is slow, so it's important that those with low time preference start discussing this as a serious possibility sooner rather than later. This is meant to be the first in a series of BIPs regarding a hypothetical "QuBit" soft fork. The BIP is intended to propose concrete solutions, even if they're early and incomplete, so that Bitcoin developers are aware of the existence of these solutions and their potential. This is just a rough draft and not the finished BIP. I'd like to validate the approach and hear whether I should continue working on it, whether serious changes are needed, or if this truly isn't a worthwhile endeavor right now.
> > > >
> > > > The BIP can be found here:
> > > > https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki
> > > >
> > > > Thank you for your time.

--
You received this message because you are subscribed to a topic in the Google Groups "Bitcoin Development Mailing List" group. To unsubscribe from this topic, visit https://groups.google.com/d/topic/bitcoindev/Aee8xKuIC2s/unsubscribe. To unsubscribe from this group and all its topics, send an email to bitcoindev+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/87b4e402-39d8-46b0-8269-4f81fa501627n%40googlegroups.com.

Thanks for the response as always, Antoine, and I've made several substantial updates to the BIP in case you'd like to give it another once-over. I'm going to submit P2QRH to bips soon.

On Thursday, August 22, 2024 at 1:29:09 AM UTC-6 Antoine Riard wrote:
Hello Hunter,

> Well, it's also important to remember that for every qubit added, it doubles the power of the system. A 2,000 qubit cryptographically-relevant quantum computer (CRQC) is exponentially faster than a 1,000 qubit one. There's also the capability for cross-links for multiple chips to communicate with each other, which IBM is also researching. The IBM Quantum System Two can be upgraded to support 16,000 qubits according to their marketing. Also consider that the verification of the results from the CRQC can be done via classical computer, so a high level of error correction might not be as necessary so long as the program is run enough times. It will take much longer, of course.

On performance, once again I think it all depends on the quantum computer architecture considered and whether we're talking about physical or logical qubits. As the paper "The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime" linked in your BIP judiciously observes in its introduction, surface code (as used by IBM) is only one of the error-correction techniques available.

About cross-links for multiple chips: even if each chip parallelizes towards a single classical logical unit, ordering computational units is a notoriously hard issue in classical computing. I don't think there is any certainty in quantum computer development that the sets of qubits of isolated chips can be arithmetically combined without a coefficient loss on the resulting sum (...there is always a bit of apprehension in having to dissociate marketing claims from duly peer-reviewed academic claims...). And while indeed the results can be evaluated via a classical computer, this doesn't transitively mean that the evaluation will be as efficient (in energy / computational cycles) as doing more error correction on the quantum computer side.

After looking into it more, I believe you are correct. Qubit count determines a lot of things, but not necessarily the "power"; there are many, many factors that go into that, which you've outlined.
> I've decided in one of my more recent updates to the BIP to default to the highest level of NIST security, NIST V, which provides 256 bits of security. You can see my rationale for that in this PR:
> https://github.com/cryptoquick/bips/pull/7/files

That assumes there is a security increase from scaling up the size of the public key. In the Bitcoin world, we don't even make assumptions about the public key size for the ECDSA signature scheme, as both compressed and uncompressed public keys have been historically valid. Similarly, the public key size does not have to be bundled with the specification of the signature verification scheme itself (e.g. see the BIP340 discussion on x-only public keys).

> As such, you'll see FALCON is roughly 4x larger than SQIsign signatures. Although supersingular elliptic curve quaternion isogeny-based algorithms are newer and more experimental than lattice-based cryptography, I think the benefits outweigh the risks, especially when transaction throughput is a principal concern.

There is no public key size in the security table, so it's hard to compare the overall on-chain space cost for each post-quantum signature algorithm considered. Nor is there an estimation of the verification cost for an average 200-byte transaction; and good old Hamilton quaternions, relying on complex numbers, can be hard to deal with on hobbyist CPUs, which can be a concern.

I've updated the table to reflect the key size concern. For verification cost, I've found it's difficult to compare numbers provided by the different papers. Some provide cycles, some provide durations. I do want to include a benchmark in the test vectors once they're ready.

> It's crucial that the signature and public key both receive the witness discount. Can you go into more detail on how that might be accomplished?

The BIP341 taproot annex could be used for that, see
https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki#cite_note-5

I've adjusted the BIP for this to integrate with Taproot. The primary difference is that this will use a hash of the Taproot public keys in the v3 spend script.

> Although it's too early to talk about activation of a QuBit soft fork, I've put some thought into how we can maintain the existing Bitcoin throughput with a soft fork, and I think it might be prudent to, when the time comes, introduce a 4x additional QuBit witness discount, maybe we call it the quitness, which is only available to valid P2QRH signatures. This would preclude its abuse for things like inscriptions because the signature data would need to correspond to the key, and even if this were possible, it's likely to result in only a burner address. This would increase chain state growth from roughly 100GB/yr to possibly closer to 2-300GB, depending on adoption. As the state of the art of SSD technology advances, this should allow plebs to run their own node on a 4TB disk for over a decade, even including the existing chain size of ~600GB.

The annex could have typed fields for a further witness discount on post-quantum signatures and public keys. However, I think it's a bit naive to assume that SSD technology advances will stay linear and that storage will remain economically accessible at the same pace to the tens of thousands of plebs actually running full nodes and constituting the skeleton of the base relay network. One could play out a posteriori the predictions on bandwidth technological advances that were made in BIP103 to see how well they held up over the last ~9 years.
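The suggested back-check is easy to sketch. Assuming BIP103's roughly 17.7% yearly growth schedule (figure recalled from that BIP's text; the arithmetic here is mine, not from the thread):

```python
# Compound BIP103-style growth over ~9 years to see where the schedule
# would have landed by now.
years = 9
annual_growth = 0.177            # ~17.7%/yr, the rate BIP103 proposed

factor = (1 + annual_growth) ** years
print(f"{factor:.1f}x")          # a bit over 4x across the period
```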

According to the C program in BIP-101, it looks like the block size would have increased by nearly 4x over the past ~9 years. I've specified in the BIP a separate witness, which I call the quitness, that will solely receive the additional 4x discount. Schnorr signatures are still kept in the witness.
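The effect of the extra discount being discussed can be sketched as follows. Sizes are illustrative (a FALCON-class signature taken at the BIP's "~10x a 64-byte Schnorr signature" figure), and the "quitness" with its 16x total discount is the proposal under discussion, not anything deployed:

```python
# Virtual size contributed by signature data under different discounts:
# at discount d, each byte counts as 1/d of a virtual byte.
def vbytes(data_bytes: int, discount: int) -> float:
    return data_bytes / discount

schnorr_vb = vbytes(64, 4)      # ordinary witness, 4x discount
falcon_wit = vbytes(640, 4)     # FALCON-sized signature in the ordinary witness
falcon_qwit = vbytes(640, 16)   # same signature in a "quitness" with 4x more discount

print(schnorr_vb, falcon_wit, falcon_qwit)  # 16.0 160.0 40.0
```

Under these assumptions a P2QRH spend's signature would cost only 2.5x what a Schnorr spend costs today, instead of 10x.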
(There is another caution with evaluating technological advances, namely that some hardware components could actually be massively consumed by other cryptocurrencies for their consensus algorithms...)

> If we were to use the same approach for FALCON signatures, a 16x discount would be needed, and I think that's far too much for the community to accept. As for pub key size and verification time, these are secondary considerations if the primary constraint is maintaining present transaction throughput. That's what makes SQIsign so promising.

Well, if there is something like the annex with typed fields, each type of post-quantum signature could get a wider discount, especially if there are verification asymmetries favoring some schemes over others, even if their security properties differ.

As you know, Bitcoin doesn't charge based on the complexity of how long a script takes to run, so it would make sense to charge based only upon byte count. If runtime is a major concern, and it is desired by the community, it can be proposed as a separate BIP, and potentially included in a QuBit soft fork.
> The Impact paper seems to dismiss Grover's algorithm, but I think it's important to err on the side of caution and instead use a 32-byte double SHA-2 (HASH256) for additional security in the P2QRH output.

Performance-wise, it doesn't shock me to use a double SHA-2 (HASH256), as it has been added for many domain-separation tagged hashes in taproot. About Grover's algorithm, it's rather the sample space and collision space that should be better defined to be relevant; you can always downgrade the performance of Grover's algorithm by scaling up the sample space, however it's not certain that's practical for bitcoin transaction generation.

That's good. Additionally, because Grover's algorithm scales so poorly compared to Shor's, I think it's a safe security assumption that HASH256 will be more secure for use in the v3 spend script.
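A one-line way to see why Grover's algorithm is the lesser worry here (standard quadratic-speedup reasoning, not a figure from the paper):

```python
# Grover only square-roots the search space: a preimage search over a
# 256-bit hash still costs on the order of 2^128 quantum evaluations.
classical_ops = 2 ** 256
grover_ops = classical_ops ** 0.5        # float arithmetic, illustrative only

print(grover_ops == 2.0 ** 128)          # True: still far out of reach
```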
> I'm not sure I understand what you mean by this...
> Is your coin scarcity comment related to what I call "satoshi's shield" in the BIP?

Not at all the "satoshi's shield" as you're describing it in the BIP.

This is just the observation that bitcoin coins are scarce, in the sense that you need to burn raw energy to acquire the rewards according to the issuance schedule (or miner fees). Bitcoin script can be designed to require that a sufficient number of bitcoin coins, or satoshis, be burned before unlocking a coin locked under a quantum-frail scriptpubkey.

That means any quantum computer attacker, even with an efficient quantum computer, might not be able to break the redeem script itself, only the signatures composing the redeem script's checksig operations.

Let's give a concrete example. Say you have the following pseudo script:

    <OP_DEPTH> <OP_PUSHDATA2> <998> <OP_EQUALVERIFY> <pubkey> <OP_CHECKSIG>

Interpreted, this script should request that the spending party, whoever it is, provide a witness stack of length 998 bytes, all dummy elements.
Those dummy elements put the burden on the quantum computer attacker to burn fees at the current sat-per-vbyte rate to realize a quantum exploit.
(They could leverage SIGHASH_NONE to escape this "fee jail"... however that sounds to expose them to being overridden by a miner.)

So, assuming this defensive scheme in the face of a quantum exploit is sound, I think it puts the burden on a quantum attacker to have hashrate capabilities at the current level of difficulty, not solely an efficient CRQC.
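A rough sketch of the fee burden this imposes; the per-element accounting and the feerate are illustrative assumptions, not measured values:

```python
# Cost of spending an output that demands ~998 dummy witness elements.
def padded_witness_fee(n_elements: int, elem_size: int, sat_per_vb: float) -> float:
    # each stack element carries ~1 byte of length prefix; witness bytes
    # receive the 4x discount, i.e. 4 bytes per virtual byte
    witness_bytes = n_elements * (elem_size + 1)
    return witness_bytes / 4 * sat_per_vb

# 998 one-byte dummies at an assumed 20 sat/vB:
print(padded_witness_fee(998, 1, 20))    # 9980.0 sats of pure padding cost
```

Scaling `elem_size` toward `MAX_SCRIPT_ELEMENT_SIZE` shows how the ladder can be made arbitrarily expensive up to the consensus limits mentioned earlier.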

I'm not sure I understand the point you're making, but only valid public key / signature pairs in the quitness will be considered valid.
> Yes, this makes more sense. I'm not sure anything can be done with the fraud proofs, but they could at least prove that a bad actor is present. Ideally both approaches are combined for maximum security and accountability.

Any KYC necessarily hurts mining pools, as there is no single KYC definition you can implement that does not open the door to a kind of DoS exploitation.

It is not an issue to build a practical fraud-proof system on seen transactions; the open question is more whether the average bitcoin user would pay to download fraud proofs demonstrating that a given miner is not engaging in quantum exploits.

Makes sense.
> I've taken Antoine's feedback to heart and added FALCON to the specification, including a section that addresses the increased maintenance burden of adding two distinct post-quantum cryptosystems.

Thank you for the addition. On the maintenance burden, there is always the counter-argument that you can secure coins under multiple post-quantum signature schemes, especially if they come from different families of hardness assumptions. If one of the two schemes remains secure, the coins are still locked by the other half.

You'll see I've taken this feedback to heart and specified hybrid cryptography in the BIP.
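The hybrid construction Pierre-Luc suggested earlier in the thread (a two-leaf Merkle commitment over two public keys, with one bit selecting the branch) can be sketched like this. Key bytes and helper names are placeholders, and a real scheme would presumably use tagged hashes:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaf_a: bytes, leaf_b: bytes) -> bytes:
    # commit to both pubkeys; revealing one leaf needs only the sibling hash
    return h(h(leaf_a) + h(leaf_b))

def verify_branch(root: bytes, leaf: bytes, sibling: bytes, leaf_is_right: bool) -> bool:
    pair = sibling + h(leaf) if leaf_is_right else h(leaf) + sibling
    return h(pair) == root

sqisign_pk = b"\x02" * 64     # placeholder SQIsign public key
sphincs_pk = b"\x03" * 32     # placeholder hash-based backup public key

root = merkle_root(sqisign_pk, sphincs_pk)
# Normal spend path: reveal the SQIsign leaf plus one sibling hash (one bit
# of position); the backup leaf stays hidden unless SQIsign is ever broken.
print(verify_branch(root, sqisign_pk, h(sphincs_pk), leaf_is_right=False))  # True
```

If one scheme falls, coins committed this way can still only move under the other, which is the "still locked by the other half" property described above.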
I think it could be interesting to split the BIP into multiple ones: one for the general consensus mechanism introducing P2QRH with all quantum-risk considerations, and an individual one for each signature algorithm that could be deployed under this generic P2QRH. Kind of in the same way that BIP340 / BIP341 are split.

You might be right about that. I'd still like to specify FALCON for the first one, but additional signature algorithms can get their own BIPs.
Best,
Antoine
ots hash: b57e9fe0b3de603ca66be29b7f1ba04fa5b8bc516c1277114ab42ac9f8572e12

Let me know if there are any additional changes you would like me to make. I'll be submitting the BIP upstream to the bips repo as a draft PR soon. Do you mind if I credit you in the Acknowledgements section? Thanks for all the great feedback so far.

Le jeudi 15 août 2024 à 06:25:01 UTC+1, Hunter Beast a écrit :
I've taken Antoine's feedback to heart and added FALCON to the specification, including a section that addresses the increased maintenance burden of adding two distinct post-quantum cryptosystems.
Please review.
On Tuesday, August 6, 2024 at 11:50:35 AM UTC-6 Hunter Beast wrote:

That's alright, Antoine, it's been a busy month for me too.

> So I think it's good to stay cool-minded, and I think my observation about talking of a "super-exponential rate" as used in maaku's old blog post does not hold a lot of rigor to describe the advances in the field of quantum computing. Note also how IBM is a commercial entity that can have a lot of interest in "pumping" the state of "quantum computing" to gather funding (there is a historical anecdote among bitcoin OG circles about Vitalik trying to do an ICO to build a quantum computer like 10 years ago, just to remember).

Well, it's also important to remember that for every qubit added, it doubles the power of the system. A 2,000 qubit cryptographically-relevant quantum computer (CRQC) is exponentially faster than a 1,000 qubit one. There's also the capability for cross-links for multiple chips to communicate with each other, which IBM is also researching. The IBM Quantum System Two can be upgraded to support 16,000 qubits according to their marketing. Also consider that the verification of the results from the CRQC can be done via classical computer, so a high level of error correction might not be as necessary so long as the program is run enough times. It will take much longer, of course.
> I think FALCON is what has the smallest pubkey + sig size for hash-and-sign lattice-based schemes. So I think it's worth reworking the BIP to see what has the smallest generation / validation time and pubkey + signature size for the main post-quantum scheme, at least for Dilithium, FALCON, SPHINCS+ and SQIsign. For a hypothetical witness discount, a v2 P2QRH could always be moved into a well-templated annex tag / field.
I've decided in one of my more recent updates to the BIP to default to the highest level of NIST security, NIST V, which provides 256 bits of security. You can see my rationale for that in this PR:
https://github.com/cryptoquick/bips/pull/7/files
Then, referencing this table:
As such, you'll see FALCON is roughly 4x larger than SQIsign signatures. Although supersingular elliptic curve quaternion isogeny-based algorithms are newer and more experimental than lattice-based cryptography, I think the benefits outweigh the risks, especially when transaction throughput is a principal concern.

It's crucial that the signature and public key both receive the witness discount. Can you go into more detail on how that might be accomplished?

Although it's too early to talk about activation of a QuBit soft fork, I've put some thought into how we can maintain the existing Bitcoin throughput with a soft fork, and I think it might be prudent to, when the time comes, introduce a 4x additional QuBit witness discount, maybe we call it the quitness, which is only available to valid P2QRH signatures. This would preclude its abuse for things like inscriptions because the signature data would need to correspond to the key, and even if this were possible, it's likely to result in only a burner address. This would increase chain state growth from roughly 100GB/yr to possibly closer to 2-300GB, depending on adoption. As the state of the art of SSD technology advances, this should allow plebs to run their own node on a 4TB disk for over a decade, even including the existing chain size of ~600GB.

If we were to use the same approach for FALCON signatures, a 16x discount would be needed, and I think that's far too much for the community to accept. As for pub key size and verification time, these are secondary considerations if the primary constraint is maintaining present transaction throughput. That's what makes SQIsign so promising.

> See the literature on quantum attacks on bitcoin in the references of the paper you quote ("The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime") for a discussion of Grover's search algorithm.

The Impact paper seems to dismiss Grover's algorithm, but I think it's important to err on the side of caution and instead use a 32-byte double SHA-2 (HASH256) for additional security in the P2QRH output.

> Namely, you can introduce an artificial "witness-stack size scale ladder" in pseudo-bitcoin script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP ...checksig...
> I have not verified that it works well on Bitcoin Core, though this script should put the burden on the quantum attacker of having enough bitcoin available to burn in on-chain fees, via witness size, to break a P2WPKH.

I'm not sure I understand what you mean by this...
Is your coin scarcity comment related to what I call "satoshi's shield" in the BIP?

> The technical issue is that if you implement KYC for a mining pool you're increasing your DoS surface, and this could be exploited by competing miners. A more reasonable security model could be to have miner coinbase pubkeys commit to the "seen-in-mempool" spends, and from there build "hand-wavy" fraud proofs that a miner is quantum-attacking your P2WSH spends at pubkey-reveal time during transaction relay.

Yes, this makes more sense. I'm not sure anything can be done with the fraud proofs, but they could at least prove that a bad actor is present. Ideally both approaches are combined for maximum security and accountability.

Thanks for your time!

On Friday, July 12, 2024 at 7:44:27 PM UTC-6 Antoine Riard wrote:
Hi Hunter Beast,

Apologies for the delay in answering.

> I was thinking of focusing on the IBM Quantum System Two, mentioning how it can be scaled, and that although it might be quite limited, if running Shor's variant for a sufficient amount of time, above a certain minimum threshold of qubits, it might be capable of decrypting the key to an address within one year. I base this on the estimate provided in a study by the Sussex Centre for Quantum Technologies, et al. [1]. They provide two figures: 317M qubits to decrypt in one hour, 13M qubits to decrypt in one day. It would seem it scales roughly linearly, and so, extrapolating further, 36,000 qubits would be needed to decrypt an address within one year. However, the IBM Heron QPU turned out to have a gate time 100x less than was estimated in 2022, and so it might be possible to make do with even fewer qubits within that timeframe. With only 360 qubits, barring algorithmic overhead such as for circuit memory, it might be possible to decrypt a single address within a year. That might sound like a lot, but being able to accomplish that at all would be significant, almost like a Chicago Pile moment, proving in practice something that was previously only thought theoretically possible for the past three decades. And it's only downhill from there...

Briefly surveying the paper "The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime", I think it's a reasonable framework to evaluate the practical efficiency of quantum attacks on bitcoin; it's self-consistent, and there is a critical approach referencing the usual literature on quantum attacks on bitcoin. Just note the caveat one can find in the usual quantum complexity literature, "particularly in regard to end-to-end physical resource estimation. There are many other error correction techniques available, and the best choice will likely depend on the underlying architecture's characteristics, such as the available physical qubit–qubit connectivity" (verbatim). Namely, evaluating quantum attacks is very dependent on the concrete physical architecture underpinning them.

All that said, I agree with you that if we see a quantum computer in the range of 1,000 physical qubits able to break the DLP for ECC-based schemes like secp256k1, even if it takes a year, it will be a Chicago Pile moment, comparable to the nuclear chain-reaction experiments of the 1930s and '40s.

> I think it's time to revisit these discussions given IBM's progress. They've published two videos in particular that are worth watching: their keynote from December of last year [2], and their roadmap update from just last month [3].

I have looked at the roadmap as it's available on the IBM blog post: https://www.ibm.com/quantum/blog/quantum-roadmap-2033#mark-roadmap-out-to-2033
They give only a target of 2,000 logical qubits to be reached in 2033... which is surprisingly not that strong... And one expects they will likely hit solid-state issues in laying out the Heron processor architecture in hardware. As a point of comparison, it took about two decades to advance the state of the art of lithography in traditional chip manufacturing.

So I think it's good to stay cool-minded, and I think my observation about talking of a "super-exponential rate", as used in maaku's old blog post, does not hold a lot of rigor to describe the advances in the field of quantum computing. Note also that IBM is a commercial entity that can have a lot of interest in "pumping" the state of "quantum computing" to gather funding (there is a historical anecdote among bitcoin OG circles about Vitalik trying to do an ICO to build a quantum computer about 10 years ago, just to remember).

> I'm supportive of this consideration. FALCON might be a good substitute, and maybe it can be upgraded to HAWK for even better performance depending on how much time there is. According to the BIP, FALCON signatures are ~10x larger than Schnorr signatures, so this will of course make the transaction more expensive, but we also must remember, these signatures will be going into the witness, which already receives a 4x discount. Perhaps the discount could be increased further someday to fit more transactions into blocks, but this will also likely result in more inscriptions filling unused space, which permanently increases the burden of running an archive node. Due to the controversy such a change could bring, I would rather any increases in the witness discount be excluded from future activation discussions, so as to be considered separately, even if it pertains to an increase in P2QRH transaction size.
>
> Do you think it's worth reworking the BIP to use FALCON signatures? I've only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the readiness levels between those two are presently worlds apart.

I think FALCON is what has the smallest pubkey + sig size among hash-and-sign lattice-based schemes. So I think it's worth reworking the BIP to see what has the smallest generation / validation time and pubkey + sig size for the main post-quantum scheme, at least among Dilithium, Falcon, SPHINCS+, and SQIsign. For a hypothetical witness discount, a v2 P2QRH could always be moved into a dedicated annex tag / field.
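As a rough aid to that comparison, the ballpark NIST security level I sizes can be tabulated. These figures are my own recollection of the public submissions of each scheme, not numbers from the BIP, so treat them as approximate:

```python
# Approximate NIST level-I parameter sizes in bytes, from the public
# submissions of each scheme (ballpark figures, not authoritative).
SCHEMES = {
    # name: (public key bytes, signature bytes)
    "Falcon-512":    (897, 666),
    "Dilithium2":    (1312, 2420),
    "SPHINCS+-128s": (32, 7856),
    "SQIsign-I":     (64, 177),
}

def footprint(name: str) -> int:
    """Combined pubkey + signature size, the main on-chain cost driver."""
    pk, sig = SCHEMES[name]
    return pk + sig

# List schemes from smallest to largest on-chain footprint.
for name in sorted(SCHEMES, key=footprint):
    pk, sig = SCHEMES[name]
    print(f"{name:14s} pk={pk:5d} sig={sig:5d} pk+sig={footprint(name):5d}")
```

By the pk+sig metric SQIsign is by far the most compact, but its verification is orders of magnitude slower than the lattice schemes, which is why generation / validation time has to be weighed alongside size.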

> Also, do you think it's of any concern to use HASH160 instead of HASH256 in the output script? I think it's fine for a cryptographic commitment since it's simply a hash of a hash (RIPEMD-160 of SHA-256).

See the literature on quantum attacks on bitcoin in the references of the paper you quote ("The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime") for a discussion of Grover's search algorithm.
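For intuition on why the hash width is less alarming than the ECC discrete log: Grover's search only gives a quadratic speedup on preimage search, which can be sketched as follows (my own illustration, not figures from the paper):

```python
from math import log2

# Grover's algorithm turns a classical 2^n preimage search into
# roughly 2^(n/2) quantum queries: a quadratic, not exponential, speedup.
def grover_queries(output_bits: int) -> float:
    return 2 ** (output_bits / 2)

# HASH160 (RIPEMD-160 over SHA-256) commits to 160 bits;
# HASH256 (double SHA-256) commits to 256 bits.
hash160_quantum_bits = log2(grover_queries(160))  # 80-bit quantum preimage security
hash256_quantum_bits = log2(grover_queries(256))  # 128-bit quantum preimage security
```

Even against Grover, a 160-bit commitment retains ~80 bits of quantum security, which is a different threat class from Shor's polynomial-time break of the DLP.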

> I'm not sure I fully understand this, but even more practically, as mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally with a value of fewer than 50 coins per address, and when funds ever need to be spent, the transaction is signed and submitted out of band to a trusted mining pool, ideally one that does KYC, so it's known which individual miners get to see the public key before it's mined. It's not perfect, since this relies on exogenous security assumptions, which is why P2QRH is proposed.

Again, the paper you're referencing ("The impact of hardware specifications on reaching quantum advantage...") is analyzing the performance of quantum advantage along two dimensions, namely space and time. My observation is that in Bitcoin we have an additional dimension, "coin scarcity", that can be leveraged to build a defense of address spends in the face of quantum attacks.

Namely, you can introduce an artificial "witness-stack size scale ladder" in pseudo-bitcoin-script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP ...checksig...
I have not verified that it works well on Bitcoin Core, though this script should put the burden on the quantum attacker of having enough bitcoin available to burn in on-chain fees, in the form of witness size, to break a P2WPKH.
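The burden that kind of script places on an attacker can be sketched with back-of-envelope fee arithmetic. This is my own illustration; the 1,000-byte element and the 50 sat/vB feerate are arbitrary assumptions, not values from the mail:

```python
# Sketch (not consensus code) of the extra fee an inflated witness stack
# forces a spender (including a quantum attacker) to burn per attempt.
# Under BIP 141, witness bytes carry weight 1 each, so they cost 1/4 vbyte.

def witness_vbytes(witness_bytes: int) -> float:
    """Virtual size contributed by witness data alone."""
    return witness_bytes / 4

def padding_fee_sats(padding_bytes: int, feerate_sat_per_vb: float) -> float:
    """Fee the spender must pay just for the demanded padding element."""
    return witness_vbytes(padding_bytes) * feerate_sat_per_vb

# e.g. a script demanding a 1,000-byte dummy witness element, spent at 50 sat/vB:
fee = padding_fee_sats(1_000, 50)  # 12,500 sats burned per spend attempt
```

The cost is linear in the demanded stack size, so the defense is bounded by the consensus limits Antoine mentions, but it does convert the attack from a pure compute problem into one that also consumes scarce coins.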


> ideally with a value of fewer than 50 coins per address, and when funds ever need to be spent, the transaction is signed and submitted out of band to a trusted mining pool, ideally one that does KYC, so it's known which individual miners get to see the public key before it's mined. It's not perfect, since this relies on exogenous security assumptions, which is why P2QRH is proposed.

The technical issue is that if you implement KYC for a mining pool, you're increasing your DoS surface, and this could be exploited by competing miners. A more reasonable security model could be to have miner coinbase pubkeys commit to the "seen-in-mempool" spends, and from there build "hand-wavy" fraud proofs that a miner is quantum-attacking your P2WSH spends at pubkey-reveal time during transaction relay.

Best,
Antoine
ots hash: 1ad818955bbf0c5468847c00c2974ddb5cf609d630523622bfdb27f1f0dc0b30
On Monday, June 17, 2024 at 23:25:25 UTC+1, hunter wrote:


On 2024-06-16 19:31, Antoine Riard <antoin...@gmail.com> wrote:

>
> Hi Hunter Beast. I think any post-quantum signature algorithm upgrade proposal would greatly benefit from having Shor's-based practical attacks far more defined in the Bitcoin context. As soon as you start to talk about quantum computers, there is no such thing as a "quantum computer"; rather, there is a wide array of architectures based on a range of technologies to encode qubits on nanoscale physical properties.
>
Good point. I can write a section in the BIP Motivation or Security section about how an attack might take place practically, and the potential urgency of such an attack.

I was thinking of focusing on the IBM Quantum System Two, mention how it can be scaled, and that although it might be quite limited, if running Shor's variant for a sufficient amount of time, above a certain minimum threshold of qubits, it might be capable of decrypting the key to an address within one year. I base this on the estimate provided in a study by the Sussex Centre for Quantum Technologies, et al. [1]. They provide two figures: 317M qubits to decrypt in one hour, 13M qubits to decrypt in one day. It would seem it scales roughly linearly, and so extrapolating it further, 36,000 qubits would be needed to decrypt an address within one year. However, the IBM Heron QPU turned out to have a gate time 100x less than was estimated in 2022, and so it might be possible to make do with even fewer qubits still within that timeframe. With only 360 qubits, barring algorithmic overhead such as for circuit memory, it might be possible to decrypt a single address within a year. That might sound like a lot, but being able to accomplish that at all would be significant, almost like a Chicago Pile moment, proving something in practice that was previously only thought theoretically possible for the past 3 decades. And it's only downhill from there...
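The extrapolation above can be reproduced with simple arithmetic, assuming the roughly linear qubits-versus-time trade-off actually holds across three orders of magnitude (a strong assumption, flagged as such):

```python
# Data points from the Sussex study [1]: qubits needed to break one
# secp256k1 key within a given wall-clock budget.
qubits_one_hour = 317e6
qubits_one_day = 13e6

# The ratio is roughly the 24x time ratio, which is what suggests
# qubit count scales ~linearly with 1/time.
ratio = qubits_one_hour / qubits_one_day  # ~24

# Extrapolate from one day out to one year under that assumption:
qubits_one_year = qubits_one_day / 365  # ~36,000 qubits
```

The further step down to 360 qubits rests on the additional 100x gate-time claim for Heron, which is why it is hedged as "barring algorithmic overhead" in the text.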
>
> It is not certain that any Shor's algorithm variant works smoothly independently of the quantum computer architecture considered (e.g. gate frequency, gate infidelity, cooling energy consumption), and I think it's an interesting open game-theory problem whether you can concentrate a sufficient amount of energy before any coin owner moves their coins in consequence (e.g. seeing a quantum break in the mempool and reacting with a counter-spend).
>
It should be noted that P2PK keys still hold millions of bitcoin, and those encode the entire public key for everyone to see for all time. Thus, early QC attacks won't need to consider the complexities of the mempool.
>
> In my opinion, one of the last times the subject was addressed on the mailing list, the description of the state of the quantum computing field was not realistic and got into risk-characterization hyperbole, talking about a "super-exponential rate" (when indeed there is no empirical realization that distinct theoretical advances in quantum capabilities can be combined with each other) [1].
>
I think it's time to revisit these discussions given IBM's progress. They've published two videos in particular that are worth watching: their keynote from December of last year [2], and their roadmap update from just last month [3].
>
> On your proposal, there is an immediate observation which comes to mind, namely why not use one of the algorithms (Dilithium, SPHINCS+, Falcon) which have been through the 3 rounds of NIST cryptanalysis. Apart from the signature size, which sounds to be smaller, in a network of full nodes any PQ signature algorithm should have reasonable verification performance.
>
I'm supportive of this consideration. FALCON might be a good substitute, and maybe it can be upgraded to HAWK for even better performance depending on how much time there is. According to the BIP, FALCON signatures are ~10x larger than Schnorr signatures, so this will of course make the transaction more expensive, but we also must remember, these signatures will be going into the witness, which already receives a 4x discount. Perhaps the discount could be increased further someday to fit more transactions into blocks, but this will also likely result in more inscriptions filling unused space, which permanently increases the burden of running an archive node. Due to the controversy such a change could bring, I would rather any increases in the witness discount be excluded from future activation discussions, so as to be considered separately, even if it pertains to an increase in P2QRH transaction size.
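To put rough numbers on the witness-discount point, here is my own back-of-envelope, assuming a ~666-byte Falcon-512 signature (the figure in the Falcon spec) against a 64-byte BIP 340 Schnorr signature; neither number comes from the BIP text itself:

```python
# BIP 141: weight = 4 * base_size + witness_size, vsize ~= weight / 4,
# so each witness byte costs only 1/4 of a vbyte.

def witness_vbytes(nbytes: int) -> float:
    return nbytes / 4

schnorr_sig = 64   # bytes, BIP 340
falcon_sig = 666   # bytes, Falcon-512 (approximate, from the Falcon spec)

extra_vbytes = witness_vbytes(falcon_sig) - witness_vbytes(schnorr_sig)
# ~150 extra vbytes per input: a real fee increase, but the 4x witness
# discount already softens the ~10x raw size blowup.
```

This is what makes a further discount increase tempting for P2QRH, and also exactly why it would subsidize inscription-style data stuffing, hence the suggestion to debate it separately.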
Do you think it's worth reworking the BIP to use FALCON signatures? I've only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the readiness levels between those two are presently worlds apart.
Also, do you think it's of any concern to use HASH160 instead of HASH256 in the output script? I think it's fine for a cryptographic commitment since it's simply a hash of a hash (RIPEMD-160 of SHA-256).
>
> Lastly, there is a practical defensive technique that can be implemented today by coin owners to protect in the face of hypothetical quantum adversaries. Namely, setting spending scripts to request an artificially inflated witness stack, as the cost has to be borne by the spender. I think one can easily do that with OP_DUP and OP_GREATERTHAN and a bit of stack shuffling. While the efficiency of this technique is limited by the max consensus size of the script stack (`MAX_STACK_SIZE`) and the max consensus size of a stack element (`MAX_SCRIPT_ELEMENT_SIZE`), this adds an additional "scarce coins" prerequisite on the quantum adversaries to succeed. Shor's algorithm is only defined under the classic resources of computational complexity, time and space.
>
I'm not sure I fully understand this, but even more practically, as mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally with a value of fewer than 50 coins per address, and when funds ever need to be spent, the transaction is signed and submitted out of band to a trusted mining pool, ideally one that does KYC, so it's known which individual miners get to see the public key before it's mined. It's not perfect, since this relies on exogenous security assumptions, which is why P2QRH is proposed.
>
> Best,
> Antoine
> [1] https://freicoin.substack.com/p/why-im-against-taproot
>
I'm grateful you took the time to review the BIP and offer your detailed insights.

[1] "The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime," 2022 - https://pubs.aip.org/avs/aqs/article/4/1/013801/2835275/The-impact-of-hardware-specifications-on-reaching
[2] https://www.youtube.com/watch?v=De2IlWji8Ck
[3] https://www.youtube.com/watch?v=d5aIx79OTps
>
>
> On Friday, June 14, 2024 at 15:30:54 UTC+1, Hunter Beast wrote:
>
> > Good points. I like your suggestion of SPHINCS+, just due to how mature it is in comparison to SQIsign. It's already in its third round and has several standards-compliant implementations, and it has an actual specification rather than just a research paper. One thing to consider is that NIST-I round 3 signatures are 982 bytes in size, according to what I was able to find in the documents hosted by the SPHINCS website.
> > https://web.archive.org/web/20230711000109if_/http://sphincs.org/data/sphincs+-round3-submission-nist.zip
> >
> > One way to handle this is to introduce this as a separate address type from SQIsign. That won't require OP_CAT, and I do want to keep this soft fork limited in scope. If SQIsign does become significantly broken, in this hopefully far-future scenario, I might be supportive of an increase in the witness discount.
> >
> > Also, I've made some additional changes based on your feedback on X. You can review them here if you so wish:
> > https://github.com/cryptoquick/bips/pull/5/files?short_path=917a32a#diff-917a32a71b69bf62d7c85dfb13d520a0340a30a2889b015b82d36411ed45e754
> >
> >
> > On Friday, June 14, 2024 at 8:15:29 AM UTC-6, Pierre-Luc Dallaire-Demers wrote:
> > > SQIsign is blockchain friendly but also very new; I would recommend adding a hash-based backup key in case an attack on SQIsign is found in the future (recall that SIDH broke over the span of a weekend: https://eprint.iacr.org/2022/975.pdf).
> > > Backup keys can be added in the form of a Merkle tree where one branch would contain the SQIsign public key and the other the public key of the recovery hash-based scheme. For most transactions it would only add one bit to specify the SQIsign branch.
> > > The hash-based method could be SPHINCS+, which is standardized by NIST but requires adding extra code, or Lamport, which is not standardized but can be verified on-chain with OP_CAT.
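A minimal sketch of Pierre-Luc's two-leaf commitment idea. The function names, the plain SHA-256 leaf/node hashing, and the branch-bit encoding are all mine and purely illustrative; a production design would at minimum use tagged hashes to separate leaves from internal nodes:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def commit_root(sqisign_pk: bytes, backup_pk: bytes) -> bytes:
    """Two-leaf Merkle root committing to both public keys."""
    return sha256(sha256(sqisign_pk) + sha256(backup_pk))

def reveal_sqisign(sqisign_pk: bytes, backup_pk: bytes):
    """Normal spend path: reveal the SQIsign pk, the sibling hash,
    and the single branch bit (0 = SQIsign branch)."""
    return sqisign_pk, sha256(backup_pk), 0

def verify(root: bytes, leaf_pk: bytes, sibling: bytes, bit: int) -> bool:
    """Recompute the root from the revealed leaf and its sibling."""
    h = sha256(leaf_pk)
    pair = h + sibling if bit == 0 else sibling + h
    return sha256(pair) == root
```

The backup path works symmetrically with bit = 1, revealing the hash-based public key only if SQIsign is ever broken.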
> > >
> > > On Sunday, June 9, 2024 at 12:07:16 p.m. UTC-4, Hunter Beast wrote:
> > > > The motivation for this BIP is to provide a concrete proposal for adding quantum resistance to Bitcoin. We will need to pick a signature algorithm, implement it, and have it ready in the event of a quantum emergency. There will be time to adopt it. Importantly, this first step is a more substantive answer to those with concerns beyond, "quantum computers may pose a threat, but we likely don't have to worry about that for a long time". Bitcoin development and activation is slow, so it's important that those with low time preference start discussing this as a serious possibility sooner rather than later. This is meant to be the first in a series of BIPs regarding a hypothetical "QuBit" soft fork. The BIP is intended to propose concrete solutions, even if they're early and incomplete, so that Bitcoin developers are aware of the existence of these solutions and their potential. This is just a rough draft and not the finished BIP. I'd like to validate the approach and hear if I should continue working on it, whether serious changes are needed, or if this truly isn't a worthwhile endeavor right now.
> > > >
> > > > The BIP can be found here:
> > > > https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki
> > > >
> > > > Thank you for your time.
> > > >
> > >
> > >
> >
> >
>
>


--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/5d43fbd6-723d-4d3d-bc35-427c36a4a06an%40googlegroups.com.