Date: Wed, 21 Aug 2024 23:20:20 -0700 (PDT)
From: Antoine Riard <antoine.riard@gmail.com>
To: Bitcoin Development Mailing List <bitcoindev@googlegroups.com>
Message-Id: <264e0340-ddfa-411c-a755-948399400b08n@googlegroups.com>
Subject: Re: [bitcoindev] Re: Proposing a P2QRH BIP towards a quantum resistant soft fork

Hello Hunter,

> Well, it's also important to remember that for every qubit added, it doubles the power of the system. A 2,000 qubit cryptographically-relevant quantum computer (CRQC) is exponentially faster than a 1,000 qubit one. There's also the capability for cross-links for multiple chips to communicate with each other, which IBM is also researching. The IBM Quantum System Two can be upgraded to support 16,000 qubits according to their marketing. Also consider that the verification of the results from the CRQC can be done via classical computer, so a high level of error correction might not be as necessary so long as the program is run enough times. It will take much longer, of course.

On performance, once again I think it all depends on the quantum computer architecture considered and whether we're talking about physical qubits or logical qubits. As the paper "The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime" linked in your BIP judiciously observes in its introduction, the surface code (as used by IBM) is only one of the error-correction techniques.

About cross-links between multiple chips: even if each chip parallelizes towards a single classical logical unit, ordering computational units is a notoriously hard issue on classical computers. I don't think there is any certainty in quantum computer development that the qubit sets of isolated chips can be arithmetically added together without a coefficient of loss on the resulting sum (...there is always a bit of apprehension in having to dissociate marketing claims from duly peer-reviewed academic claims...). And while the results can indeed be evaluated via a classical computer, it does not follow that this evaluation will be as efficient (in energy / computational cycles) as doing more error correction on the quantum computer side.
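To illustrate the physical / logical distinction above, here is a rough back-of-envelope in Python. All constants are illustrative assumptions on my side: the ~2,330 logical-qubit figure is an often-cited estimate for running Shor against a 256-bit curve, and the code distance is arbitrary; neither number comes from your BIP nor from the Webber et al. paper.

    # Back-of-envelope only: why "N qubits" is ambiguous without saying
    # whether they are physical or logical qubits.
    logical_qubits = 2330   # often-cited logical-qubit estimate for Shor on a 256-bit curve
    code_distance = 25      # illustrative surface-code distance, not a measured figure

    # A distance-d surface code needs on the order of 2 * d^2 physical
    # qubits (data + ancilla) per logical qubit.
    physical_per_logical = 2 * code_distance ** 2
    physical_qubits = logical_qubits * physical_per_logical

    print(physical_per_logical)  # 1250 physical qubits per logical qubit
    print(physical_qubits)       # ~2.9 million physical qubits overall

Under those assumptions, a headline figure of a few thousand qubits means something very different depending on which kind of qubit is being counted.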
> I've decided in one of my more recent updates to the BIP to default to the highest level of NIST security, NIST V, which provides 256 bits of security. You can see my rationale for that in this PR:
> https://github.com/cryptoquick/bips/pull/7/files

Those are assumptions that there is a security increase from scaling up the size of the public key. In the Bitcoin world, we don't even make an assumption on the public key size for the ECDSA signature scheme, as both compressed and uncompressed public keys have been historically valid. Similarly, the public key size does not have to be bundled with the specification of the signature verification scheme itself (e.g. see the BIP340 discussion on x-only public keys).

> As such, you'll see FALCON is roughly 4x larger than SQIsign signatures. Although supersingular elliptic curve quaternion isogeny-based algorithms are newer and more experimental than lattice-based cryptography, I think the benefits outweigh the risks, especially when transaction throughput is a principal concern.

There are no public key sizes in the security table, so it's hard to compare the overall on-chain space cost for each post-quantum signature algorithm considered. Nor, actually, is there an estimation of the verification cost for an average 200-byte transaction; good old Hamilton's quaternions, relying on complex numbers, can be hard to deal with on hobbyist CPUs, and that can be a concern.

> It's crucial that the signature and public key both receive the witness discount. Can you go into more detail in how that might be accomplished?

The BIP341 taproot annex could be used for that, see https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki#cite_note-5

> Although it's too early to talk about activation of a QuBit soft fork, I've put some thought into how we can maintain the existing Bitcoin throughput with a soft fork, and I think it might be prudent to, when the time comes, introduce a 4x additional QuBit witness discount, maybe we call it the quitness, which is only available to valid P2QRH signatures. This would preclude its abuse for things like inscriptions because the signature data would need to correspond to the key, and even if this were possible, it's likely to result in only a burner address. This would increase chain state growth from roughly 100GB/yr to possibly closer to 2-300GB, depending on adoption. As the state of the art of SSD technology advances, this should allow plebs to run their own node on a 4TB disk for over a decade, even including existing chain size of ~600GB.

The annex could have typed fields for a further witness discount on post-quantum signatures and public keys. However, I think it's a bit naive to assume that SSD technology advances will stay linear and that they will remain economically accessible at the same pace to the tens of thousands of plebs actually running full nodes and constituting the skeleton of the base-relay network. One could replay, a posteriori, the predictions on bandwidth technological advances made in BIP103 to see how well they have held up over the last ~9 years.

(There is another caution with evaluating technological advances, namely that some hardware components could actually be massively consumed by other cryptocurrencies for their consensus algorithms...)
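To sketch what typed annex fields could mean in practice, here is a minimal illustration in Python. BIP341 only defines the annex as a final witness element starting with byte 0x50; the tag values, the two-byte length encoding, and the FALCON-1024-like payload sizes below are my own assumptions for illustration, not anything specified in BIP341 or in your draft.

    ANNEX_PREFIX  = 0x50   # BIP341: the annex is a final witness element starting with 0x50
    TAG_PQ_PUBKEY = 0x01   # hypothetical tag for a post-quantum public key
    TAG_PQ_SIG    = 0x02   # hypothetical tag for a post-quantum signature

    def encode_annex(records):
        """Serialize (tag, payload) pairs into one annex-style witness element."""
        out = bytes([ANNEX_PREFIX])
        for tag, payload in records:
            out += bytes([tag]) + len(payload).to_bytes(2, "little") + payload
        return out

    # Dummy payloads sized roughly like FALCON-1024 (~1793-byte pubkey, ~1280-byte signature).
    annex = encode_annex([(TAG_PQ_PUBKEY, b"\x00" * 1793),
                          (TAG_PQ_SIG,    b"\x00" * 1280)])
    print(len(annex))  # 3080 bytes of witness data before any extra discount

A consensus rule could then key any additional discount on the tag, which is what would allow differentiated treatment per signature scheme, as discussed below.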
> If we were to use the same approach for FALCON signatures, a 16x discount would be needed, and I think that's far too much for the community to accept. As for pub key size and verification time, these are secondary considerations if the primary constraint is maintaining present transaction throughput. That's what makes SQIsign so promising.

Well, if there is something like the annex with typed fields, each type of post-quantum signature could get a wider discount, especially if there are verification asymmetries favoring one scheme over another, even if their security properties differ.

> The Impact paper seems to dismiss Grover's algorithm, but I think it's important to err on the side of caution and instead use a 32-byte double SHA-2 (HASH256) for additional security in the P2QRH output.

Performance-wise, using a double SHA-2 (HASH256) doesn't shock me, as it has been added for many domain-separation tagged hashes in taproot.

About Grover's algorithm, it's rather the sample space and collision space that should be better defined to be relevant; you can always degrade the performance of Grover's algorithm by scaling up the sample space, though it's not clear this is practical for bitcoin transaction generation.

> I'm not sure I understand what you mean by this...
> Is your coin scarcity comment related to what I call "satoshi's shield" in the BIP?

Not at all the "satoshi's shield" as you're describing it in the BIP.

This is just the observation that bitcoin coins are scarce, in the sense that you need to burn raw energy to acquire the rewards according to the issuance schedule (or miner fees). A Bitcoin script can be designed to request that a sufficient number of bitcoin coins, or satoshis, be burned before unlocking a coin locked under a quantum-frail scriptpubkey.

That means any quantum computer attacker, even if they have an efficient quantum computer, might not be able to break the redeem script itself, only the signatures feeding the redeem script's checksig operations.

Let's give a concrete example; say you have the following pseudo script:

    <OP_DEPTH> <OP_PUSHDATA2> <998> <OP_EQUALVERIFY> <pubkey> <OP_CHECKSIG>

Interpreted, this script should request from the spending party, whoever it is, a witness stack of length 998 bytes, all dummy elements.

Those dummy elements put the burden on the quantum computer attacker to burn fees at the current sat-per-vbyte rate to realize a quantum exploit (see the back-of-envelope sketch further below).

(They could leverage SIGHASH_NONE to escape this "fee jail"... however that sounds like it exposes them to being overridden by a miner.)

So assuming this defensive scheme in the face of a quantum exploit is sound, I think it puts the burden on a quantum attacker to have hashrate capabilities at the current level of difficulty, not solely an efficient CRQC.

> Yes, this makes more sense. I'm not sure anything can be done with the fraud proofs, but they could at least prove that a bad actor is present. Ideally both approaches are combined for maximum security and accountability.

No, KYC necessarily hurts mining pools, as there is no single KYC definition you can implement that does not open the door to a kind of DoS exploitation.

Building a practical fraud-proof system on seen transactions is not an issue; the open question is more whether the average bitcoin user would pay to download fraud proofs demonstrating that a given miner is not engaging in quantum exploits.
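Coming back to the dummy-witness "fee jail" above, the promised back-of-envelope in Python. The feerate is an arbitrary example of my own; the only facts used are that witness bytes weigh 1 weight unit each and that 4 weight units make 1 vbyte.

    # Rough marginal cost a spender (including a quantum attacker) pays to
    # carry ~998 bytes of dummy witness data per spend attempt.
    dummy_witness_bytes = 998
    feerate_sat_per_vb = 20                 # arbitrary example feerate

    extra_vbytes = dummy_witness_bytes / 4  # witness data: 1 WU per byte, 4 WU per vbyte
    extra_fee_sats = extra_vbytes * feerate_sat_per_vb

    print(extra_vbytes)    # ~250 extra vbytes
    print(extra_fee_sats)  # ~5000 sats per spend attempt at 20 sat/vB

The absolute number is small for a single spend, but it scales with every quantum-frail output the attacker tries to sweep, which is the point of the scheme.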
> I've taken Antoine's feedback to heart and added FALCON to the specification, including a section that addresses the increased maintenance burden of adding two distinct post-quantum cryptosystems.

Thank you for the addition. On the maintenance burden, there is always the counter-argument that you can secure coins under multiple post-quantum signature schemes, especially if they come from different breeds of hardness assumptions. If one of the two schemes stays secure, the coins are still locked by the other half.

I think it could be interesting to split the BIP into multiple ones: one for the general consensus mechanism introducing P2QRH with all the quantum-risk considerations, and an individual one for each signature algorithm that could be deployed under this generic P2QRH. Kind of in the same way that BIP340 / BIP341 are split.

Best,
Antoine

ots hash: b57e9fe0b3de603ca66be29b7f1ba04fa5b8bc516c1277114ab42ac9f8572e12

On Thursday, August 15, 2024 at 06:25:01 UTC+1, Hunter Beast wrote:

> I've taken Antoine's feedback to heart and added FALCON to the specification, including a section that addresses the increased maintenance burden of adding two distinct post-quantum cryptosystems.
> Please review.
> https://github.com/cryptoquick/bips/pull/9/files
>
> On Tuesday, August 6, 2024 at 11:50:35 AM UTC-6 Hunter Beast wrote:
>
>> That's alright, Antoine, it's been a busy month for me too.
>>
>> > So I think it's good to stay cool minded and I think my observation about talking of "super-exponential rate" as used in maaku's old blog post does not hold a lot of rigor to describe the advances in the field of quantum computing. Note also how IBM is a commercial entity that can have a lot of interest in "pumping" the state of "quantum computing" to gather funding (there is a historical anecdote among bitcoin OG circles about Vitalik trying to do an ICO to build a quantum computer like 10 years ago, just to remember).
>>
>> Well, it's also important to remember that for every qubit added, it doubles the power of the system. A 2,000 qubit cryptographically-relevant quantum computer (CRQC) is exponentially faster than a 1,000 qubit one. There's also the capability for cross-links for multiple chips to communicate with each other, which IBM is also researching. The IBM Quantum System Two can be upgraded to support 16,000 qubits according to their marketing. Also consider that the verification of the results from the CRQC can be done via classical computer, so a high level of error correction might not be as necessary so long as the program is run enough times. It will take much longer, of course.
>>
>> > I think FALCON is what has the smallest pubkey + sig size for hash-and-sign lattice-based schemes. So I think it's worth reworking the BIP to see what has the smallest generation / validation time and pubkey + sig size space for the main post-quantum scheme. At least for Dilithium, Falcon, SPHINCS+ and SQIsign. For a hypothetical witness discount, a v2 P2QRH could always be moved into a templated annex tag / field.
>>
>> I've decided in one of my more recent updates to the BIP to default to the highest level of NIST security, NIST V, which provides 256 bits of security.
>> You can see my rationale for that in this PR:
>> https://github.com/cryptoquick/bips/pull/7/files
>> Then, referencing this table:
>> https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki#security
>> As such, you'll see FALCON is roughly 4x larger than SQIsign signatures. Although supersingular elliptic curve quaternion isogeny-based algorithms are newer and more experimental than lattice-based cryptography, I think the benefits outweigh the risks, especially when transaction throughput is a principal concern.
>>
>> It's crucial that the signature and public key both receive the witness discount. Can you go into more detail in how that might be accomplished?
>>
>> Although it's too early to talk about activation of a QuBit soft fork, I've put some thought into how we can maintain the existing Bitcoin throughput with a soft fork, and I think it might be prudent to, when the time comes, introduce a 4x additional QuBit witness discount, maybe we call it the quitness, which is only available to valid P2QRH signatures. This would preclude its abuse for things like inscriptions because the signature data would need to correspond to the key, and even if this were possible, it's likely to result in only a burner address. This would increase chain state growth from roughly 100GB/yr to possibly closer to 2-300GB, depending on adoption. As the state of the art of SSD technology advances, this should allow plebs to run their own node on a 4TB disk for over a decade, even including existing chain size of ~600GB.
>>
>> If we were to use the same approach for FALCON signatures, a 16x discount would be needed, and I think that's far too much for the community to accept. As for pub key size and verification time, these are secondary considerations if the primary constraint is maintaining present transaction throughput. That's what makes SQIsign so promising.
>>
>> > See literature on quantum attacks on bitcoin in the reference of the paper you quote ("The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime") for a discussion on Grover's search algorithm.
>>
>> The Impact paper seems to dismiss Grover's algorithm, but I think it's important to err on the side of caution and instead use a 32-byte double SHA-2 (HASH256) for additional security in the P2QRH output.
>>
>> > Namely you can introduce an artificial "witness-stack size scale ladder" in pseudo-bitcoin script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP ...checksig...
>> > I have not verified it works well on bitcoin core though this script should put the burden on the quantum attacker to have enough bitcoin amount available to burn in on-chain fees in witness size to break a P2WPKH.
>>
>> I'm not sure I understand what you mean by this...
>> Is your coin scarcity comment related to what I call "satoshi's shield" in the BIP?
>>
>> > The technical issue if you implement KYC for a mining pool is that you're increasing your DoS surface and this could be exploited by competing miners. A more reasonable security model can be to have miner coinbase pubkeys being used to commit to the "seen-in-mempool" spends and from then build "hand wavy" fraud proofs that a miner is quantum attacking your P2WSH spends at pubkey reveal time during transaction relay.
>>
>> Yes, this makes more sense. I'm not sure anything can be done with the fraud proofs, but they could at least prove that a bad actor is present. Ideally both approaches are combined for maximum security and accountability.
>>
>> Thanks for your time!
>>
>> On Friday, July 12, 2024 at 7:44:27 PM UTC-6 Antoine Riard wrote:
>>
>> Hi Hunter Beast,
>>
>> Apologies for the delay in answer.
>>
>> > I was thinking of focusing on the IBM Quantum System Two, mention how it can be scaled, and that although it might be quite limited, if running Shor's variant for a sufficient amount of time, above a certain minimum threshold of qubits, it might be capable of decrypting the key to an address within one year. I base this on the estimate provided in a study by the Sussex Centre for Quantum Technologies, et al. [1]. They provide two figures, 317M qubits to decrypt in one hour, 13M qubits to decrypt in one day. It would seem it scales roughly linearly, and so extrapolating it further, 36,000 qubits would be needed to decrypt an address within one year. However, the IBM Heron QPU turned out to have a gate time 100x less than was estimated in 2022, and so it might be possible to make do with even fewer qubits still within that timeframe. With only 360 qubits, barring algorithmic overhead such as for circuit memory, it might be possible to decrypt a single address within a year. That might sound like a lot, but being able to accomplish that at all would be significant, almost like a Chicago Pile moment, proving something in practice that was previously only thought theoretically possible for the past 3 decades. And it's only downhill from there...
>>
>> Briefly surveying the paper "The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime", I think it's a reasonable framework to evaluate the practical efficiency of quantum attacks on bitcoin; it's self-consistent and there is a critical approach referencing the usual literature on quantum attacks on bitcoin. Just note the caveat one can find in the usual quantum complexity literature, "particularly in regard to end-to-end physical resource estimation. There are many other error correction techniques available, and the best choice will likely depend on the underlying architecture's characteristics, such as the available physical qubit-qubit connectivity" (verbatim). Namely, evaluating quantum attacks is very dependent on the concrete physical architecture underpinning it.
>>
>> All that said, I agree with you that if you see a quantum computer in the range of 1000 physical qubits being able to break the DLP for ECC-based encryption like secp256k1, even if it takes a year it will be a Chicago Pile moment, or whatever the comparable experiments on nuclear chain reactions were in the '30s / '40s.
>>
>> > I think it's time to revisit these discussions given IBM's progress. They've published two videos in particular that are worth watching; their keynote from December of last year [2], and their roadmap update from just last month [3]
>>
>> I have looked at the roadmap as it's available on the IBM blog post: https://www.ibm.com/quantum/blog/quantum-roadmap-2033#mark-roadmap-out-to-2033
>> They only give a target of 2000 logical qubits to be reached in 2033... which is surprisingly not that strong... And one expects they might hit solid-state issues in laying out the Heron processor architecture in hardware. As a point of thinking, it took something like 2 decades to advance the state of the art of lithography in traditional chip manufacturing.
>>
>> So I think it's good to stay cool minded and I think my observation about talking of "super-exponential rate" as used in maaku's old blog post does not hold a lot of rigor to describe the advances in the field of quantum computing. Note also how IBM is a commercial entity that can have a lot of interest in "pumping" the state of "quantum computing" to gather funding (there is a historical anecdote among bitcoin OG circles about Vitalik trying to do an ICO to build a quantum computer like 10 years ago, just to remember).
>>
>> > I'm supportive of this consideration. FALCON might be a good substitute, and maybe it can be upgraded to HAWK for even better performance depending on how much time there is. According to the BIP, FALCON signatures are ~10x larger than Schnorr signatures, so this will of course make the transaction more expensive, but we also must remember, these signatures will be going into the witness, which already receives a 4x discount. Perhaps the discount could be increased further someday to fit more transactions into blocks, but this will also likely result in more inscriptions filling unused space also, which permanently increases the burden of running an archive node. Due to the controversy such a change could bring, I would rather any increases in the witness discount be excluded from future activation discussions, so as to be considered separately, even if it pertains to an increase in P2QRH transaction size.
>>
>> > Do you think it's worth reworking the BIP to use FALCON signatures? I've only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the readiness levels between those two are presently worlds apart.
>>
>> I think FALCON is what has the smallest pubkey + sig size for hash-and-sign lattice-based schemes. So I think it's worth reworking the BIP to see what has the smallest generation / validation time and pubkey + sig size space for the main post-quantum scheme. At least for Dilithium, Falcon, SPHINCS+ and SQIsign. For a hypothetical witness discount, a v2 P2QRH could always be moved into a templated annex tag / field.
>>
>> > Also, do you think it's of any concern to use HASH160 instead of HASH256 in the output script? I think it's fine for a cryptographic commitment since it's simply a hash of a hash (MD160 of SHA-256).
>>
>> See literature on quantum attacks on bitcoin in the reference of the paper you quote ("The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime") for a discussion on Grover's search algorithm.
>>
>> > I'm not sure I fully understand this, but even more practically, as mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally with a value of fewer than 50 coins per address, and when funds ever need to be spent, the transaction is signed and submitted out of band to a trusted mining pool, ideally one that does KYC, so it's known which individual miners get to see the public key before it's mined. It's not perfect, since this relies on exogenous security assumptions, which is why P2QRH is proposed.
>>
>> Again, the paper you're referencing ("The impact of hardware specifications on reaching quantum advantage...") is analyzing the performance of quantum advantage under 2 dimensions, namely space and time. My observation is that in Bitcoin we have an additional dimension, "coin scarcity", that can be leveraged to build a defense of address spends in the face of quantum attacks.
>>
>> Namely you can introduce an artificial "witness-stack size scale ladder" in pseudo-bitcoin script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP ...checksig...
>> I have not verified it works well on bitcoin core though this script should put the burden on the quantum attacker to have enough bitcoin amount available to burn in on-chain fees in witness size to break a P2WPKH.
>>
>> > ideally with a value of fewer than 50 coins per address, and when funds ever need to be spent, the transaction is signed and submitted out of band to a trusted mining pool, ideally one that does KYC, so it's known which individual miners get to see the public key before it's mined. It's not perfect, since this relies on exogenous security assumptions, which is why P2QRH is proposed.
>>
>> The technical issue if you implement KYC for a mining pool is that you're increasing your DoS surface and this could be exploited by competing miners. A more reasonable security model can be to have miner coinbase pubkeys being used to commit to the "seen-in-mempool" spends and from then build "hand wavy" fraud proofs that a miner is quantum attacking your P2WSH spends at pubkey reveal time during transaction relay.
>>
>> Best,
>> Antoine
>>
>> ots hash: 1ad818955bbf0c5468847c00c2974ddb5cf609d630523622bfdb27f1f0dc0b30
>> On Monday, June 17, 2024 at 23:25:25 UTC+1, hunter wrote:
>>
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA256
>>
>> On 2024-06-16 19:31, Antoine Riard wrote:
>> >
>> > Hi Hunter Beast, I think any post-quantum signature algorithm upgrade proposal would grandly benefit to have Shor's based practical attacks far more defined in the Bitcoin context. As soon as you start to talk about quantum computers there is no such thing as a "quantum computer", though a wide array of architectures based on a range of technologies to encode qubits on nanoscale physical properties.
>> >
>> Good point.
>> I can write a section in the BIP Motivation or Security section about how an attack might take place practically, and the potential urgency of such an attack.
>>
>> I was thinking of focusing on the IBM Quantum System Two, mention how it can be scaled, and that although it might be quite limited, if running Shor's variant for a sufficient amount of time, above a certain minimum threshold of qubits, it might be capable of decrypting the key to an address within one year. I base this on the estimate provided in a study by the Sussex Centre for Quantum Technologies, et al. [1]. They provide two figures, 317M qubits to decrypt in one hour, 13M qubits to decrypt in one day. It would seem it scales roughly linearly, and so extrapolating it further, 36,000 qubits would be needed to decrypt an address within one year. However, the IBM Heron QPU turned out to have a gate time 100x less than was estimated in 2022, and so it might be possible to make do with even fewer qubits still within that timeframe. With only 360 qubits, barring algorithmic overhead such as for circuit memory, it might be possible to decrypt a single address within a year. That might sound like a lot, but being able to accomplish that at all would be significant, almost like a Chicago Pile moment, proving something in practice that was previously only thought theoretically possible for the past 3 decades. And it's only downhill from there...
>> >
>> > This is not certain that any Shor's algorithm variant works smoothly independently of the quantum computer architecture considered (e.g. gate frequency, gate infidelity, cooling energy consumption) and I think it's an interesting open game-theory problem if you can concentrate a sufficient amount of energy before any coin owner moves them in consequence (e.g. seeing a quantum break in the mempool and reacting with a counter-spend).
>> >
>> It should be noted that P2PK keys still hold millions of bitcoin, and those encode the entire public key for everyone to see for all time. Thus, early QC attacks won't need to consider the complexities of the mempool.
>> >
>> > In my opinion, one of the last times the subject was addressed on the mailing list, the description of the state of the quantum computer field was not realistic and got into risk characterization hyperbole talking about a "super-exponential rate" (when indeed there is no empirical realization that distinct theoretical advances on quantum capabilities can be combined with each other) [1].
>> >
>> I think it's time to revisit these discussions given IBM's progress. They've published two videos in particular that are worth watching; their keynote from December of last year [2], and their roadmap update from just last month [3].
>> >
>> > On your proposal, there is an immediate observation which comes to mind, namely why not use one of the algorithms (Dilithium, SPHINCS+, Falcon) which have been through the 3 rounds of NIST cryptanalysis. Apart from the signature size, which sounds to be smaller, in a network of full-nodes any PQ signature algorithm should have reasonable verification performance.
>> >
>> I'm supportive of this consideration.
>> FALCON might be a good substitute, and maybe it can be upgraded to HAWK for even better performance depending on how much time there is. According to the BIP, FALCON signatures are ~10x larger than Schnorr signatures, so this will of course make the transaction more expensive, but we also must remember, these signatures will be going into the witness, which already receives a 4x discount. Perhaps the discount could be increased further someday to fit more transactions into blocks, but this will also likely result in more inscriptions filling unused space also, which permanently increases the burden of running an archive node. Due to the controversy such a change could bring, I would rather any increases in the witness discount be excluded from future activation discussions, so as to be considered separately, even if it pertains to an increase in P2QRH transaction size.
>>
>> Do you think it's worth reworking the BIP to use FALCON signatures? I've only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the readiness levels between those two are presently worlds apart.
>>
>> Also, do you think it's of any concern to use HASH160 instead of HASH256 in the output script? I think it's fine for a cryptographic commitment since it's simply a hash of a hash (MD160 of SHA-256).
>> >
>> > Lastly, there is a practical defensive technique that can be implemented today by coin owners to protect in the face of hypothetical quantum adversaries. Namely setting spending scripts to request an artificially inflated witness stack, as the cost has to be borne by the spender. I think one can easily do that with OP_DUP and OP_GREATERTHAN and a bit of stack shuffling. While the efficiency of this technique is limited by the max consensus size of the script stack (`MAX_STACK_SIZE`) and the max consensus size of a stack element (`MAX_SCRIPT_ELEMENT_SIZE`), this adds an additional "scarce coins" pre-requirement on the quantum adversaries to succeed. Shor's algorithm is only defined under the classic resources of computational complexity, time and space.
>> >
>> I'm not sure I fully understand this, but even more practically, as mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally with a value of fewer than 50 coins per address, and when funds ever need to be spent, the transaction is signed and submitted out of band to a trusted mining pool, ideally one that does KYC, so it's known which individual miners get to see the public key before it's mined.
>> It's not perfect, since this relies on exogenous security assumptions, which is why P2QRH is proposed.
>> >
>> > Best, Antoine
>> > [1] https://freicoin.substack.com/p/why-im-against-taproot
>> >
>> I'm grateful you took the time to review the BIP and offer your detailed insights.
>>
>> [1] "The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime," 2022 - https://pubs.aip.org/avs/aqs/article/4/1/013801/2835275/The-impact-of-hardware-specifications-on-reaching
>> [2] https://www.youtube.com/watch?v=De2IlWji8Ck
>> [3] https://www.youtube.com/watch?v=d5aIx79OTps
>> >
>> > On Friday, June 14, 2024 at 15:30:54 UTC+1, Hunter Beast wrote:
>> >
>> > > Good points. I like your suggestion of SPHINCS+, just due to how mature it is in comparison to SQIsign. It's already in its third round and has several standards-compliant implementations, and it has an actual specification rather than just a research paper. One thing to consider is that NIST-I round 3 signatures are 982 bytes in size, according to what I was able to find in the documents hosted by the SPHINCS website.
>> > > https://web.archive.org/web/20230711000109if_/http://sphincs.org/data/sphincs+-round3-submission-nist.zip
>> > >
>> > > One way to handle this is to introduce this as a separate address type than SQIsign. That won't require OP_CAT, and I do want to keep this soft fork limited in scope. If SQIsign does become significantly broken, in this hopefully far future scenario, I might be supportive of an increase in the witness discount.
>> > >
>> > > Also, I've made some additional changes based on your feedback on X. You can review them here if you so wish:
>> > > https://github.com/cryptoquick/bips/pull/5/files?short_path=917a32a#diff-917a32a71b69bf62d7c85dfb13d520a0340a30a2889b015b82d36411ed45e754
>> > >
>> > > On Friday, June 14, 2024 at 8:15:29 AM UTC-6 Pierre-Luc Dallaire-Demers wrote:
>> > > > SQIsign is blockchain friendly but also very new, I would recommend adding a hash-based backup key in case an attack on SQIsign is found in the future (recall that SIDH broke over the span of a weekend https://eprint.iacr.org/2022/975.pdf).
>> > > > Backup keys can be added in the form of a Merkle tree where one branch would contain the SQIsign public key and the other the public key of the recovery hash-based scheme. For most transactions it would only add one bit to specify the SQIsign branch.
>> > > > The hash-based method could be SPHINCS+, which is standardized by NIST but requires adding extra code, or Lamport, which is not standardized but can be verified on-chain with OP_CAT.
>> > > >
>> > > > On Sunday, June 9, 2024 at 12:07:16 p.m. UTC-4 Hunter Beast wrote:
>> > > > > The motivation for this BIP is to provide a concrete proposal for adding quantum resistance to Bitcoin. We will need to pick a signature algorithm, implement it, and have it ready in the event of a quantum emergency. There will be time to adopt it.
>> > > > > Importantly, this first step is a more substantive answer to those with concerns beyond, "quantum computers may pose a threat, but we likely don't have to worry about that for a long time". Bitcoin development and activation is slow, so it's important that those with low time preference start discussing this as a serious possibility sooner rather than later. This is meant to be the first in a series of BIPs regarding a hypothetical "QuBit" soft fork. The BIP is intended to propose concrete solutions, even if they're early and incomplete, so that Bitcoin developers are aware of the existence of these solutions and their potential. This is just a rough draft and not the finished BIP. I'd like to validate the approach and hear if I should continue working on it, whether serious changes are needed, or if this truly isn't a worthwhile endeavor right now.
>> > > > >
>> > > > > The BIP can be found here:
>> > > > > https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki
>> > > > >
>> > > > > Thank you for your time.
>>
>> -----BEGIN PGP SIGNATURE-----
>> Version: OpenPGP.js v4.10.3
>> Comment: https://openpgpjs.org
>>
>> wsBcBAEBCAAGBQJmcJwuAAoJEDEPCKe+At0hjhkIAIdM7QN9hAO0z+KO7Bwe
>> JT45XyusJmDG1gJbLZtb+SfuE1X5PFDHNTLSNliJWsOImxFCiBPnlXhYQ4B/
>> 8gST3rqplUwkdYr52E5uMxTTq9YaXTako4PNb8d7XfraIwDKXAJF+5Skf4f9
>> bQUYMieBAFSEXCmluirQymB+hUoaze60Whd07hhpzbGSwK4DdSXltufkyCDE
>> tJUforNWm8X25ABTSNDh3+if5V/wJuix/u8GJyMHKucaEAO01ki2oyusq2rt
>> Xe6ysUieclusFFdQAs4PfYxhzXTf5XeAbFga/qxrVtbt7q2nUkYklqteT2pp
>> mH/DU20HMBeGVSrISrvsmLw=
>> =+wat
>> -----END PGP SIGNATURE-----

--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/264e0340-ddfa-411c-a755-948399400b08n%40googlegroups.com.
> Well, it's also important to remember that fo= r every qubit added, it doubles the power of the system. A 2,000 qubit cryp= tographically-relevant quantum computer (CRQC) is exponentially faster than= a 1,000 qubit one. There's also the > capability for cross-links for mu= ltiple chips to communicate with each other, which IBM is also researching.= The IBM Quantum System Two can be upgraded to support 16,000 qubits accord= ing to their marketing. Also consider that the ve> rification of the res= ults from the CRQC can be done via classical computer, so a high level of e= rror correction might not be as necessary so long as the program is run eno= ugh times. It will take much longer, of course.

On performance, = once again I think it all depends on the quantum computer architecture cons= idered and if we're talking about physical qubits / logical qubits. As the = paper "The impact of hardware specifications on reaching quantum advantage = in the fault tolerant regime" linked in your BIP judiciously observe in its= introduction that surface code (as used by IBM) is only one of the error c= ode correction technique.

About cross-links for multiple chips, = even if each chip parallelize towards a single classical logical unit, orde= ring computational units is a notoriously hard issue in classical computer.= I don't think there is any certainty in quantum computer development that = each set of qubits of isolated chips can be arithmetically additioned witho= ut a coefficient loss on the resulting sum (...there is always a bit of app= rehension to have to dissociate between marketing claims and academic claim= duly peer-reviewed...). And while indeed, the results can be evaluated via= a classical computer, this doesn't mean transitively that the evaluation w= ill be as efficient (in energy / computational cycles) rather than doing mo= re error correction on the quantum computer side.

> I've deci= ded in one of my more recent updates to the BIP to default to the highest l= evel of NIST security, NIST V, which provides 256 bits of security. You can= see my rationale for that in this PR:
> https://github.com/cryptoq= uick/bips/pull/7/files

Those are assumptions there is a security= increase by scaling up the size of the public key. In the Bitcoin world, w= e don't even make assumption on the public key size
for ECDSA signatur= e scheme as both compressed and uncompressed public keys have been historic= ally valid. Similarly, the public key size does not have to be bundled with=
the specification of the signature verification scheme itself (e.g se= e BIP340 discussion on x-only public keys).

> As such, you'll= see FALCON is roughly 4x larger than SQIsign signatures. Although supersin= gular elliptic curve quaternion isogeny-based algorithms are newer and
> more experimental than lattice-based cryptography, I think the benefi= ts outweigh the risks, especially when transaction throughput is a principa= l concern.
=C2=A0
There are no public key size in the security ta= ble so it's hard to compare the overall on-chain space cost for each signat= ure post-quantum algorithm considered.
Neither actually, there is an e= stimation of the verification cost for an average 200-bytes transactions, o= ld good's Hamilton's quaternion and relying on complex numbers, which can b= e hard to deal with for the hobbyist CPUs can be a concern.

>= It's crucial that the signature and public key both receive the witness di= scount. Can you go into more detail in how that might be accomplished?

The BIP341 taproot annex could be used for that, see https://github.= com/bitcoin/bips/blob/master/bip-0341.mediawiki#cite_note-5

>= Although it's too early to talk about activation of a QuBit soft fork, I'v= e put some thought into how we can maintain the existing Bitcoin throughput= with a soft fork, and I think it might be prudent to, when the time comes,= introdu> ce a 4x additional QuBit witness discount, maybe we call it th= e quitness, which is only available to valid P2QRH signatures. This would p= reclude its abuse for things like inscriptions because the signature data w= ould need to corresp> ond to the key, and even if this were possible, it= 's likely to result in only a burner address. This would increase chain sta= te growth from roughly 100GB/yr to possibly closer to 2-300GB, depending on= adoption. As the state of the a> rt of SSD technology advances, this sh= ould allow plebs to run their own node on a 4TB disk for over a decade, eve= n including existing chain size of ~600GB.

The annex could have = typed fields for post-quantum signature and public key further witness disc= ount. However, I think it's a bit naive to assume that SSD technology advan= ces will stay linear and that it will be economically accessible at the sam= e pace to the tens of thousands of plebs actually running full-nodes and co= nstituting the skeleton of the base-relay network. One could play out a pos= teriori the predictions on bandwidth technological advances that have been = made in BIP103 to see how well they held on the last ~9 years.

(= There is another caution with evaluating technological advances, namely tha= t some hardware components could be actually massively consumed by other cr= yptocurrencies for their consensus algorithms...)

> If we wer= e to use the same approach for FALCON signatures, a 16x discount would be n= eeded, and I think that's far too much for the community to accept. As for = pub key size and verification
> time, these are secondary considera= tions if the primary constraint is maintaining present transaction throughp= ut. That's what makes SQIsign so promising.

Well, if there is so= mething like the annex with typed fields each type of post-quantum signatur= e could get a wider discount, especially if there are verification asymmetr= ies favoring some scheme over another one, even if the security properties = are differing.

> The Impact paper seems to dismiss Grover's a= lgorithm, but I think it's important to err on the size of caution and inst= ead use a 32-byte double SHA-2 (HASH256) for additional security in the P2Q= RH output.

Performance-wise, this doesn't shock me to use a doub= le SHA-2 (HASH256) as it has been added for many domain separation tagged h= ash in taproot.
About Grover's algorithm, it's more the sample space a= nd collision space that should be more defined to be relevant, you can alwa= ys downgrade the performance of the Grover's algorithm by scaling up the sa= mple space, however it's not sure it's practical for bitcoin transaction ge= neration.

> I'm not sure I understand what you mean by this..= .
> Is your coin scarcity comment related to what I call "satoshi's= shield" in the BIP?

Not at all the "satoshi's shield" as you're= describing in the BIP.

This is just the observation that bitcoi= n coins are scarce in the sense that you need to burn raw energy to acquire= the rewards according to the issuance schedule (or miners fees). Bitcoin s= cript can be designed to request that a sufficient number of bitcoin coins,= or satoshis, are burned before to unlock a coin locked under a quantum-fra= il scriptpubkey.

That means any quantum computer attacker, even = if they have an efficient quantum computer, might not be able to break the = redeem script itself, only the signatures composing the redeem script check= sig operations.

Let's give a concrete example, let's say you ha= ve the following pseudo script:

=C2=A0 =C2=A0 =C2=A0 =C2=A0 <= <OP_DEPTH> <OP_PUSHDATA2> <998> <OP_EQUALVERIFY> &l= t;pubkey> <OP_CHECKSIG>>

Interpeted the following sc= ript should request from the spending party, whatever it is to provide a wi= tness stack of length 998 bytes, all dummy elements.
Those dummy eleme= nts are putting the burden on the quantum computer attacker to burn fees at= the current sat per vbyte rate to realize a quantum exploit.
(There c= ould leverage SIGHASH_NONE to escape this "fee jail"... however it sounds t= o expose them to be overrided by a miner).

So assuming this defe= nsive scheme in face of quantum exploit is sound, I think this put the burd= en of a quantum attacker to have hashrate capabilities at the current level= of difficulty, not solely an efficient CRQC.

> Yes, this mak= es more sense. I'm not sure anything can be done with the fraud proofs, but= they could at least prove that a bad actor is present. Ideally both approa= ches are combined for maximum security and accountability.

No KY= C is necessarily hurting mining pools as there is no single kyc definition = that you can implement that do not open the door for a kind of DoS exploita= tion.

This is not an issue to build a practical fraud proofs sys= tems on seen transaction, the open question is more if the average bitcoin = user would pay to download fraud proofs demonstrating that a given miner is= not engaging in quantum exploit.

> I've taken Antoine's feed= back to heart and added FALCON to the specification, including a section th= at addresses the increased maintenance burden of adding two distinct post-q= uantum cryptosystems.

Thanks you for the addition, for the maint= enance burden there is always the counter-argument to be made that you can = secure a coins under multiple post-quantun signature scheme, especially if = they're from different hardness assumptions breed. If one of the two scheme= is secure, the coins are still locked by the other half.

I thin= k it could be interesting to split the BIP in multiple ones, one for the ge= neral consensus mechanism introducing a P2QRH with all quantum risks consid= erations, and an individual one for each signature algorithm that could be = deployed udner this generic P2QRH. Kinda in the same way, that BIP340 / BIP= 341 are split.

Best,
Antoine
ots hash: b57e9fe0b3de603= ca66be29b7f1ba04fa5b8bc516c1277114ab42ac9f8572e12

Le jeudi 15 ao=C3=BBt 2= 024 =C3=A0 06:25:01 UTC+1, Hunter Beast a =C3=A9crit=C2=A0:
I've taken Antoine's= feedback to heart and added FALCON to the specification, including a secti= on that addresses the increased maintenance burden of adding two distinct p= ost-quantum cryptosystems.
Please review.
On Tuesday, August 6, 2024 at 11:50:35= =E2=80=AFAM UTC-6 Hunter Beast wrote:
That's alright, Antoine, it's been a busy month for= me too.

> So I think it's good to stay cool mind= ed and I think my observation about talking of "super-exponential rate= " as used in maaku old blog post does not
> hold a lot of rigor = to describe the advances in the field of quantum computing. Note, also how = IMB is a commercial entity that can have a lot of interests
> in &quo= t;pumping" the state of "quantum computing" to gather fundin= gs (there is a historical anecdote among bitcoin OG circles about Vitalik t= rying to do an
> ICO to build a quantum computer like 10 years ago, j= ust to remember).

Well, it's also important to= remember that for every qubit added, it doubles the power of the system. A= 2,000 qubit cryptographically-relevant quantum computer (CRQC) is exponent= ially faster than a 1,000 qubit one. There's also the capability for cr= oss-links for multiple chips to communicate with each other, which IBM is a= lso researching. The IBM Quantum System Two can be upgraded to support 16,0= 00 qubits according to their marketing. Also consider that the verification= of the results from the CRQC can be done via classical computer, so a high= level of error correction might not be as necessary so long as the program= is run enough times. It will take much longer, of course.

> I think FALCON is what has the smallest pubkey + sig size for= hash-and-sign lattice-based schemes. So I think it's worth reworking t= he BIP to see what has the smallest generation / validation time and pubkey= + size space for the main post-quantum scheme. At least for dilthium, falc= on, sphincs+ and SQISign. For an hypothetical witness discount, a v2 P2QRH = could be always be moved in a very template annex tag / field.
I've decided in one of my more recent updates to the BIP t= o default to the highest level of NIST security, NIST V, which provides 256= bits of security. You can see my rationale for that in this PR:
Then, refer= encing this table:
As such, you'll see FALCON is roughly 4x lar= ger than SQIsign signatures. Although supersingular elliptic curve quaterni= on isogeny-based algorithms are newer and more experimental than lattice-ba= sed cryptography, I think the benefits outweigh the risks, especially when = transaction throughput is a principal concern.

It&= #39;s crucial that the signature and public key both receive the witness di= scount. Can you go into more detail in how that might be accomplished?

Although it's too early to talk about activation o= f a QuBit soft fork, I've put some thought into how we can maintain the= existing Bitcoin throughput with a soft fork, and I think it might be prud= ent to, when the time comes, introduce a 4x additional QuBit witness discou= nt, maybe we call it the quitness, which is only available to valid P2QRH s= ignatures. This would preclude its abuse for things like inscriptions becau= se the signature data would need to correspond to the key, and even if this= were possible, it's likely to result in only a burner address. This wo= uld increase chain state growth from roughly 100GB/yr to possibly closer to= 2-300GB, depending on adoption. As the state of the art of SSD technology = advances, this should allow plebs to run their own node on a 4TB disk for o= ver a decade, even including existing chain size of ~600GB.

If we were to use the same approach for FALCON signatures, a 16x discount would be needed, and I think that's far too much for the community to accept. As for pubkey size and verification time, these are secondary considerations if the primary constraint is maintaining present transaction throughput. That's what makes SQIsign so promising.

> See literature on quantum attacks on bitcoin in the references of the paper you quote ("The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime") for a discussion of Grover's search algorithm.

The Impact paper seems to dismiss Grover's algorithm, but I think it's important to err on the side of caution and instead use a 32-byte double SHA-2 (HASH256) for additional security in the P2QRH output.
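For reference, a minimal sketch of the two commitments being compared, using Python's hashlib; the only point is the 20-byte vs 32-byte digest length:

import hashlib

def hash256(data):
    # HASH256: double SHA-256, 32-byte digest (what the P2QRH output would commit to)
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def hash160(data):
    # HASH160: RIPEMD-160 of SHA-256, 20-byte digest (as in P2PKH / P2WPKH);
    # requires an OpenSSL build that still exposes ripemd160
    return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

pubkey = b"\x02" + b"\x11" * 32    # placeholder serialized public key
print(len(hash160(pubkey)), "bytes ->", len(hash160(pubkey)) * 8, "bit commitment")
print(len(hash256(pubkey)), "bytes ->", len(hash256(pubkey)) * 8, "bit commitment")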

> Namely you can introduce an artificial "witness-stack size scale ladder" in pseudo-bitcoin script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP ...checksig...
> I have not verified it works well on bitcoin core, though this script should put the burden on the quantum attacker to have enough bitcoin amount available to burn in on-chain fees in witness size to break a P2WPKH.

I'm not sure I understand what you mean by this...
Is your coin scarcity comment related to what I call "satoshi's shield" in the BIP?

> The technical issue is that if you implement KYC for a mining pool you're increasing your DoS surface, and this could be exploited by competing miners. A more reasonable security model can be to have miner coinbase pubkeys being used to commit to the "seen-in-mempool" spends and from there build "hand-wavy" fraud proofs that a miner is quantum attacking your P2WSH spends at pubkey reveal time during transaction relay.

Yes, this makes more sense. I'm not sure anything can be done with the fraud proofs, but they could at least prove that a bad actor is present. Ideally both approaches are combined for maximum security and accountability.

Thanks for your time!

On Friday, July 12, 2024 at 7:44:27 PM UTC-6 Antoine Riard wrote:
Hi Hunter Beast,

Apologies for the delay in answering.

> I was thinking of focusing on the IBM Quantum System Two, mention how it can be scaled, and that although it might be quite limited, if running Shor's variant for a sufficient amount of time, above a certain minimum threshold of qubits, it might be capable of decrypting the key to an address within one year. I base this on the estimate provided in a study by the Sussex Centre for Quantum Technologies, et al. [1]. They provide two figures, 317M qubits to decrypt in one hour, 13M qubits to decrypt in one day. It would seem it scales roughly linearly, and so extrapolating it further, 36,000 qubits would be needed to decrypt an address within one year. However, the IBM Heron QPU turned out to have a gate time 100x less than was estimated in 2022, and so it might be possible to make do with even fewer qubits still within that timeframe. With only 360 qubits, barring algorithmic overhead such as for circuit memory, it might be possible to decrypt a single address within a year. That might sound like a lot, but being able to accomplish that at all would be significant, almost like a Chicago Pile moment, proving something in practice that was previously only thought theoretically possible for the past 3 decades. And it's only downhill from there...

Briefly surveying the paper "The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime", I think it's a reasonable framework to evaluate
the practical efficiency of quantum attacks on bitcoin; it's self-consistent and there is a critical approach referencing the usual literature on quantum attacks on bitcoin. Just
note the caveat, one can find in the usual quantum complexity literature, "particularly in regard to end-to-end physical resource estimation. There are many other error correction
techniques available, and the best choice will likely depend on the underlying architecture's characteristics, such as the available physical qubit–qubit connectivity" (verbatim). Namely, evaluating quantum attacks is very dependent on the concrete physical architecture underpinning it.

All that said, I agree with you that if you see a quantum computer in the range of 1,000 physical qubits being able to break the DLP for ECC-based encryption like secp256k1, even if it takes a year it will be a Chicago Pile moment, or whatever the comparable experiments were on nuclear chain reactions in the '30s / '40s.

> I think it's time to revisit these discussions given IBM's progress. They've published two videos in particular that are worth watching; their keynote from December of last year [2], and their roadmap update from just last month [3]

I have looked at the roadmap as it's available on the IBM blog post: https://www.ibm.com/quantum/blog/quantum-roadmap-2033#mark-roadmap-out-to-2033
They give only a target of 2,000 logical qubits to be reached in 2033... which is surprisingly not that strong... And one expects they might likely hit solid-
state issues in laying out the Heron processor architecture in hardware. As a point of comparison, it took about two decades to advance the state of the art
of lithography in traditional chip manufacturing.
 
So I think it's good to stay cool minded and I think my observation about talking of "super-exponential rate" as used in maaku's old blog post does not
hold a lot of rigor to describe the advances in the field of quantum computing. Note also how IBM is a commercial entity that can have a lot of interest
in "pumping" the state of "quantum computing" to gather funding (there is a historical anecdote among bitcoin OG circles about Vitalik trying to do an
ICO to build a quantum computer about 10 years ago, just to remember).

> I'm supportive of this consideration. FALCON might be a good substitute, and maybe it can be upgraded to HAWK for even better performance depending on how much time there is. According to the BIP, FALCON signatures are ~10x larger than Schnorr signatures, so this will of course make the transaction more expensive, but we also must remember, these signatures will be going into the witness, which already receives a 4x discount. Perhaps the discount could be increased further someday to fit more transactions into blocks, but this will also likely result in more inscriptions filling unused space also, which permanently increases the burden of running an archive node. Due to the controversy such a change could bring, I would rather any increases in the witness discount be excluded from future activation discussions, so as to be considered separately, even if it pertains to an increase in P2QRH transaction size.
 
> Do you think it's worth reworking the BIP to use FALCON signatures? I've only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the readiness levels between those two are presently worlds apart.

I think FALCON is what has the smallest pubkey + sig size for hash-and-sign lattice-based schemes. So I think it's worth reworking the BIP to see what has the smallest generation / validation time and pubkey + sig size for the main post-quantum scheme, at least for Dilithium, FALCON, SPHINCS+ and SQIsign. For a hypothetical witness discount, a v2 P2QRH could always be moved into a very template annex tag / field.
> Also, do you think it's of any concern to use HASH160 instead of HASH256 in the output script? I think it's fine for a cryptographic commitment since it's simply a hash of a hash (RIPEMD-160 of SHA-256).
See literature on quantum attacks on bitcoin in the references of the paper you quote ("The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime") for a discussion of Grover's search algorithm.

> I'm not sure I fully understand this, but even more practically, as mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally with a value of fewer than 50
> coins per address, and when funds ever need to be spent, the transaction is signed and submitted out of band to a trusted mining pool, ideally one that does KYC, so it's
> known which individual miners get to see the public key before it's mined. It's not perfect, since this relies on exogenous security assumptions, which is why P2QRH is
> proposed.

Again, the paper you're referencing ("The impact of hardware specifications on reaching quantum advantage...") is analyzing the performance of quantum advantage along
two dimensions, namely space and time. My observation is that in Bitcoin we have an additional dimension, "coin scarcity", that can be leveraged to build a defense of address
spends in the face of quantum attacks.

Namely you can introduce an artificial "witness-stack size scale ladder" in pseudo-bitcoin script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP ...checksig...
I have not verified it works well on bitcoin core, though this script should put the burden on the quantum attacker to have enough bitcoin amount available to burn in on-chain fees in witness size to break a P2WPKH.
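To make the shape of that ladder concrete, here is an untested Python sketch that assembles such a scriptPubKey by hand, with a placeholder pubkey. Note that a single stack element is capped at 520 bytes (MAX_SCRIPT_ELEMENT_SIZE, mentioned later in the thread), so a real ladder would spread the padding across several elements:

# Script opcode values from the Bitcoin script specification.
OP_SIZE, OP_EQUALVERIFY, OP_DROP, OP_CHECKSIG = 0x82, 0x88, 0x75, 0xAC

def push(data):
    # Minimal direct push, sufficient for data up to 75 bytes.
    assert 0 < len(data) <= 75
    return bytes([len(data)]) + data

def script_num(n):
    # Little-endian CScriptNum encoding for small positive integers.
    out = bytearray()
    while n:
        out.append(n & 0xFF)
        n >>= 8
    if out[-1] & 0x80:
        out.append(0x00)    # keep the sign bit clear
    return bytes(out)

pubkey = bytes.fromhex("02" + "11" * 32)    # placeholder 33-byte pubkey

# Spending witness: <sig> <padding element whose size must equal 1000 bytes>
script_pubkey = (
    bytes([OP_SIZE]) + push(script_num(1000)) + bytes([OP_EQUALVERIFY, OP_DROP])
    + push(pubkey) + bytes([OP_CHECKSIG])
)
print(script_pubkey.hex())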


> ideally with a value of fewer than 50 coins per address, and when funds ever need to be spent, the transaction is signed and submitted out of band to a trusted mining pool, ideally
> one that does KYC, so it's known which individual miners get to see the public key before it's mined. It's not perfect, since this relies on exogenous security assumptions, which is
> why P2QRH is proposed.
The technical issue is that if you implement KYC for a mining pool you're increasing your DoS surface, and this could be exploited by competing miners. A more reasonable security model can be to have miner coinbase pubkeys being used to commit to the "seen-in-mempool" spends and from there build "hand-wavy" fraud proofs that a miner is quantum attacking your P2WSH spends at pubkey reveal time during transaction relay.

Best,
Antoine

ots hash: 1ad818955bbf0c5468847c00c2974ddb5cf609d630523622bfdb27f1f0dc0b30
On Monday, June 17, 2024 at 23:25:25 UTC+1, hunter wrote:


On 2024-06-16 19:31, Antoine Riard <antoin...@gmail.com> wrote:

>
> Hi Hunter Beast, I think any post-quantum signature algorithm upgrade proposal would grandly benefit from having Shor's-based practical attacks far more defined in the Bitcoin context. As soon as you start to talk about quantum computers there is no such thing as a "quantum computer", but rather a wide array of architectures based on a range of technologies to encode qubits on nanoscale physical properties.
>
Good point. I can write a section in the BIP Motivation or Security section about how an attack might take place practically, and the potential urgency of such an attack.
 
I was thinking of focusing on the IBM Quantum System Two, mention how it can be scaled, and that although it might be quite limited, if running Shor's variant for a sufficient amount of time, above a certain minimum threshold of qubits, it might be capable of decrypting the key to an address within one year. I base this on the estimate provided in a study by the Sussex Centre for Quantum Technologies, et al. [1]. They provide two figures, 317M qubits to decrypt in one hour, 13M qubits to decrypt in one day. It would seem it scales roughly linearly, and so extrapolating it further, 36,000 qubits would be needed to decrypt an address within one year. However, the IBM Heron QPU turned out to have a gate time 100x less than was estimated in 2022, and so it might be possible to make do with even fewer qubits still within that timeframe. With only 360 qubits, barring algorithmic overhead such as for circuit memory, it might be possible to decrypt a single address within a year. That might sound like a lot, but being able to accomplish that at all would be significant, almost like a Chicago Pile moment, proving something in practice that was previously only thought theoretically possible for the past 3 decades. And it's only downhill from there...
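As a rough sanity check of that linear extrapolation (a sketch only; it ignores algorithmic overhead, error correction and architectural differences):

# Figures from the Sussex estimate [1], assuming qubits x runtime is roughly constant.
QUBIT_HOURS_A = 317e6 * 1      # 317M qubits for one hour
QUBIT_HOURS_B = 13e6 * 24      # 13M qubits for one day -> ~312M qubit-hours, consistent

HOURS_PER_YEAR = 365 * 24
print(QUBIT_HOURS_A / HOURS_PER_YEAR)          # ~36,000 qubits to finish within a year
print(QUBIT_HOURS_A / HOURS_PER_YEAR / 100)    # ~360 qubits if gate times really are 100x faster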
>
> It is not certain that any Shor's algorithm variant works smoothly independently of the quantum computer architecture considered (e.g. gate frequency, gate infidelity, cooling energy consumption) and I think it's an interesting open game-theory problem whether you can concentrate a sufficient amount of energy before any coin owner moves their coins in consequence (e.g. seeing a quantum break in the mempool and reacting with a counter-spend).
>
It should be noted that P2PK keys still hold millions of bitcoin, and those encode the entire public key for everyone to see for all time. Thus, early QC attacks won't need to consider the complexities of the mempool.
>
> In my opinion, one of the last times the subject was addressed on the mailing list, the description of the state of the quantum computing field was not realistic and got into risk characterization hyperbole talking about a "super-exponential rate" (when indeed there is no empirical realization that distinct theoretical advances in quantum capabilities can be combined with each other) [1].
>
I think it's time to revisit these discussions given IBM's progress. They've published two videos in particular that are worth watching; their keynote from December of last year [2], and their roadmap update from just last month [3].
>
> On your proposal, there is an immediate observation which comes to mind, namely why not use one of the algorithms (Dilithium, SPHINCS+, FALCON) which have been through the 3 rounds of NIST cryptanalysis. Apart from the signature size, which sounds to be smaller, in a network of full nodes any PQ signature algorithm should have reasonable verification performance.
>
I'm supportive of this consideration. FALCON might be a good substitute, and maybe it can be upgraded to HAWK for even better performance depending on how much time there is. According to the BIP, FALCON signatures are ~10x larger than Schnorr signatures, so this will of course make the transaction more expensive, but we also must remember, these signatures will be going into the witness, which already receives a 4x discount. Perhaps the discount could be increased further someday to fit more transactions into blocks, but this will also likely result in more inscriptions filling unused space also, which permanently increases the burden of running an archive node. Due to the controversy such a change could bring, I would rather any increases in the witness discount be excluded from future activation discussions, so as to be considered separately, even if it pertains to an increase in P2QRH transaction size.
 
Do you think it's worth reworking the BIP to use FALCON signatures? I've only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the readiness levels between those two are presently worlds apart.
 
Also, do you think it's of any concern to use HASH160 instead of HASH256 in the output script? I think it's fine for a cryptographic commitment since it's simply a hash of a hash (RIPEMD-160 of SHA-256).
>
> Lastly, there is a practical defensive technique that can be implemented today by coin owners to protect in the face of hypothetical quantum adversaries. Namely, setting spending scripts to request an artificially inflated witness stack, as the cost has to be borne by the spender. I think one can easily do that with OP_DUP and OP_GREATERTHAN and a bit of stack shuffling. While the efficiency of this technique is limited by the max consensus size of the script stack (`MAX_STACK_SIZE`) and the max consensus size of a stack element (`MAX_SCRIPT_ELEMENT_SIZE`), this adds an additional "scarce coins" pre-requirement on the quantum adversaries to succeed. Shor's algorithm is only defined under the classic resources of computational complexity, time and space.
>
I'm not sure I fully understand this, but even more practically, as mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally with a value of fewer than 50 coins per address, and when funds ever need to be spent, the transaction is signed and submitted out of band to a trusted mining pool, ideally one that does KYC, so it's known which individual miners get to see the public key before it's mined. It's not perfect, since this relies on exogenous security assumptions, which is why P2QRH is proposed.
>
> Best,
> Antoine
> [1] https://freicoin.substack.com/p/why-im-against-taproot
>
 
I'm grateful you took the time to review the BIP and offer your detailed insights.
 
[1] "The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime," 2022 - https://pubs.aip.org/avs/aqs/article/4/1/013801/2835275/The-impact-of-hardware-specifications-on-reaching
[2] https://www.youtube.com/watch?v=De2IlWji8Ck
[3] https://www.youtube.com/watch?v=d5aIx79OTps
 
>
>
> On Friday, June 14, 2024 at 15:30:54 UTC+1, Hunter Beast wrote:
>
> > Good points. I like your suggestion for SPHINCS+, just due to how mature it is in comparison to SQIsign. It's already in its third round and has several standards-compliant implementations, and it has an actual specification rather than just a research paper. One thing to consider is that NIST-I round 3 signatures are 982 bytes in size, according to what I was able to find in the documents hosted by the SPHINCS website.
> > https://web.archive.org/web/20230711000109if_/http://sphincs.org/data/sphincs+-round3-submission-nist.zip
> >
> > One way to handle this is to introduce this as a separate address type from SQIsign. That won't require OP_CAT, and I do want to keep this soft fork limited in scope. If SQIsign does become significantly broken, in this hopefully far future scenario, I might be supportive of an increase in the witness discount.
> >
> > Also, I've made some additional changes based on your feedback on X. You can review them here if you so wish:
> > https://github.com/cryptoquick/bips/pull/5/files?short_path=917a32a#diff-917a32a71b69bf62d7c85dfb13d520a0340a30a2889b015b82d36411ed45e754
> >
> >
> > On Friday, June 14, 2024 at 8:15:29 AM UTC-6 Pierre-Luc Dallaire-Demers wrote:
> > > SQIsign is blockchain-friendly but also very new; I would recommend adding a hash-based backup key in case an attack on SQIsign is found in the future (recall that SIDH broke over the span of a weekend: https://eprint.iacr.org/2022/975.pdf).
> > > Backup keys can be added in the form of a Merkle tree where one branch would contain the SQIsign public key and the other the public key of the recovery hash-based scheme. For most transactions it would only add one bit to specify the SQIsign branch.
> > > The hash-based method could be SPHINCS+, which is standardized by NIST but requires adding extra code, or Lamport, which is not standardized but can be verified on-chain with OP_CAT.
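(As an illustration of the two-leaf commitment described above, here is a minimal Python sketch with placeholder keys and SHA-256 as the tree hash; the actual construction would need to be specified by the BIP.)

import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def merkle_root_2(left, right):
    # Root of a two-leaf tree: each leaf is hashed, then the pair is hashed.
    return sha256(sha256(left) + sha256(right))

# Placeholder keys purely for illustration.
sqisign_pubkey = b"\x01" * 64         # branch 0: SQIsign public key
backup_hash_pubkey = b"\x02" * 32     # branch 1: hash-based (e.g. SPHINCS+/Lamport) public key

root = merkle_root_2(sqisign_pubkey, backup_hash_pubkey)

# Spending via the SQIsign branch reveals that key plus its sibling hash,
# letting a verifier recompute the committed root (one bit selects the branch).
proof_sibling = sha256(backup_hash_pubkey)
assert sha256(sha256(sqisign_pubkey) + proof_sibling) == root
print(root.hex())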
> > >
> > > On Sunday, June 9, 2024 at 12:07:16 p.m. UTC-4 Hunter Beast wrote:
> > > > The motivation for this BIP is to provide a concrete proposal for adding quantum resistance to Bitcoin. We will need to pick a signature algorithm, implement it, and have it ready in the event of a quantum emergency. There will be time to adopt it. Importantly, this first step is a more substantive answer to those with concerns beyond, "quantum computers may pose a threat, but we likely don't have to worry about that for a long time". Bitcoin development and activation is slow, so it's important that those with low time preference start discussing this as a serious possibility sooner rather than later. This is meant to be the first in a series of BIPs regarding a hypothetical "QuBit" soft fork. The BIP is intended to propose concrete solutions, even if they're early and incomplete, so that Bitcoin developers are aware of the existence of these solutions and their potential. This is just a rough draft and not the finished BIP. I'd like to validate the approach and hear whether I should continue working on it, whether serious changes are needed, or if this truly isn't a worthwhile endeavor right now.
> > > >
> > > > The BIP can be found here:
> > > > https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki
> > > >
> > > > Thank you for your time.
> > > >
> > > >
> > >
> > >
> >
> >
>
>

