From: Gregory Maxwell <gmaxwell@gmail.com>
Date: Mon, 15 May 2017 23:30:59 +0000
To: Bitcoin Dev <bitcoin-dev@lists.linuxfoundation.org>
Subject: [bitcoin-dev] Validationless mining without transactions

Today someone showed up on IRC suggesting a scheme to improve the ability
of miners to mine without validation while including transactions, by
shipping around an approximate sketch of the txins that were used by a
block. I pointed out that what sounded like the exact same scheme had been
proposed by Anthony Towns over a year ago; it turned out that it didn't
need any consensus changes, but also wasn't very attractive, because the
actual transmission of the block (at least with FBRP or Fibre) didn't
really take any longer... And, of course, mining without validating does a
real number on SPV security assumptions.

But then I realized that the conversation between Anthony and me was
off-list. So -- for posterity...

I think the most interesting thing about this thread is that it gives a
concrete proof that a restriction on collecting transaction fees does not
discourage validationless mining; nor do other proposed consensus changes
make it any easier to include transactions while mining without
validation.
Forwarded conversation
Subject: Blockchain verification flag (BIP draft)
------------------------

From: Anthony Towns
Date: Mon, Feb 29, 2016 at 2:13 AM
To: Gregory Maxwell

On Fri, Dec 04, 2015 at 08:26:22AM +0000, Gregory Maxwell via bitcoin-dev wrote:
> A significant fraction of hashrate currently mines blocks without
> verifying them for a span of time after a new block shows up on the
> network for economically rational reasons.

Two thoughts related to this. Are they obvious or daft?

a) Would it make sense to require some demonstration that you've validated
prior blocks? eg, you could demonstrate you've done part of the work to at
least verify signatures from the previous block by including the sha256 of
the concatenation of all the sighash values in the coinbase transaction --
if you'd already done the sig checking, calculating that as you went would
be pretty cheap, I think. Then make the rule be that if you set the
"validated" bit without including the demonstration of validation, your
block is invalid. I guess this is more or less what Peter Todd proposed in:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012105.html

b) It occurred to me while emailing with Matt Corallo, that it's probably
possible to make it easy to generate actually useful blocks while doing
validationless mining, rather than only creating empty blocks. When
creating a block, you:

- calculate a fixed size (7168 bytes?) bloom filter of the prevouts that
  the block is spending
- include the sha256 of the final bloom filter as the last output in the
  coinbase
- enforce the inclusion of that sha256 by soft-fork
- as part of fast relaying, transmit:
  - 80 byte block header
  - 7168 byte bloom filter
  - < 416 (?) byte merkle path to the coinbase
  - 64 byte sha256 midstate for coinbase up to start of the final
    transaction
  - < 128 byte tail of coinbase including bloom commitment
  (total of 7856 bytes, so less than 8kB)

I think that would be enough to verify that the proof-of-work is committing
to the bloom filter, and the bloom filter will then let you throw out any
transactions that could have been included in a block built on block n-1,
but can't be included in block n+1 -- whether they're included in the new
block, or would be double spends. So given that information you can safely
build a new block that's actually full of transactions on top of the new
block, even prior to downloading it in full, let alone validating it.

I've run that algorithm over the last couple of weeks' worth of
transactions (to see how many of the next block's transactions would have
been thrown away using that approach) and it appeared to work fine -- it
throws away maybe a dozen transactions per block compared to accurate
validation, but that's only about a couple of kB out of a 1MB block, so
something like 0.2%. (I'm seeing ~4500 prevouts per block roughly, so
that's the error rate you'd expect; doubling for 2MB's worth of txes with
segwit predicts 3.5%, doubling again would presumably result in 14% of
transactions being falsely identified as double spends prior to the block
actually validating.)

I haven't checked the math in detail, but I think that could reasonably
give an immediate 20% increase in effective blocksize, given the number of
empty blocks that get mined... (There were only ~1571MB of transactions in
the last 2016 blocks, so bumping the average from 780kB per block to 940kB
would be a 20% increase; which would bring the 1.7x segwit increase up to
2x too...)

Also, as far as I can see, you could probably even just have bitcoin core
transmit that 8kB of data around as part of propagating headers first.
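[Editor's note: for concreteness, here is a rough Python sketch of the
filter arithmetic in the scheme above. The thread only fixes the 7168-byte
filter size; the hash derivation, the choice of 8 probe positions, and all
function names are guesses for illustration, not part of the proposal. The
standard bloom estimate it computes does roughly line up with the ~0.2%
figure observed above.]

```python
import hashlib
import math

FILTER_BYTES = 7168             # size suggested in the proposal above
FILTER_BITS = FILTER_BYTES * 8  # 57344 bits
NUM_HASHES = 8                  # my choice; near-optimal for ~4500 entries

def _positions(prevout):
    # Derive NUM_HASHES bit positions from a double-sha256 of the prevout.
    # The real derivation is unspecified in the thread; this is a placeholder.
    digest = hashlib.sha256(hashlib.sha256(prevout).digest()).digest()
    return [int.from_bytes(digest[4 * i:4 * i + 4], "little") % FILTER_BITS
            for i in range(NUM_HASHES)]

def build_filter(prevouts):
    # Set the k probe bits for every prevout the block spends.
    bits = bytearray(FILTER_BYTES)
    for p in prevouts:
        for pos in _positions(p):
            bits[pos // 8] |= 1 << (pos % 8)
    return bits

def maybe_spent(filt, prevout):
    # False means definitely not spent by the block (no false negatives);
    # True means "probably spent" (small false-positive rate).
    return all(filt[pos // 8] & (1 << (pos % 8)) for pos in _positions(prevout))

def false_positive_rate(n_prevouts):
    # Standard bloom filter estimate: (1 - e^(-kn/m))^k
    k, m = NUM_HASHES, FILTER_BITS
    return (1 - math.exp(-k * n_prevouts / m)) ** k
```

A miner holding only the header and filter would drop every mempool
transaction whose inputs test positive, then build the next block from
what remains; false positives only cost a little fee income, never
validity.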
Once you've got the new header and bloom filter, the only extra bit should
be passing both those into getblocktemplate to update the previousblockhash
and transaction selection. Put both those together and it seems like you
could be mining on top of the latest block seconds after it was found, just
by naively running a bitcoin node?

I saw the "Bitcoin Classic" roadmap includes:

  "Implement "headers-first" mining. As soon as a valid 80-byte block
  header that extends the most-work chain is received, relay the header
  (via a new p2p network message) and allow mining an empty block on top
  of it, for up to 20 seconds."

which seems like the same idea done worse...

Any thoughts? Pointers to the bitcointalk thread where this was proposed
two years ago? :)

Cheers,
aj

----------
From: Gregory Maxwell
Date: Mon, Feb 29, 2016 at 3:20 AM
To: Anthony Towns

On Mon, Feb 29, 2016 at 2:13 AM, Anthony Towns wrote:
> Would it make sense to require some demonstration that you've validated
> prior blocks? eg, you could demonstrate you've done part of the work

That information is easily shared/delegated... so it just creates another
centralized information source, and another source of unfairness producing
latency in the mining process. Without actually preventing parties from
mining. Doubly so in the context of how validationless mining is actually
done; the miners pull from other miners' stratum servers; so they'll just
see the commitments there. So I don't see there being too much value there.

> if you set the "validated" bit without including the demonstration of
> validation, your block is invalid.

Pretty good incentive to not adopt the scheme, perhaps? Moreover, this
creates another way for a block to be invalid which has no compact fraud
proof. :(

> It occurred to me while emailing with Matt Corallo, that it's probably
> possible to make it easy to generate actually useful blocks while doing
> validationless mining, rather than only creating empty blocks.
I agree but: I'm basically tired of repeating to people that there is no
need for a validationless block to be empty. So yes, I agree with you on
that fact; it's possible for miners to do this already, with no protocol
changes (yes, it requires trusting each other but inherently
validationless mining already requires that). Miners only don't bother
right now because the funds left behind are insubstantial.

It's absolutely untrue that an empty block is not useful. Every block,
empty or not, mined against the best tip you know contributes to the
resolution of consensus and collapsing the network onto a single state.
Every block that was mined only after validating a block amplifies
security, by helping leave behind an invalid chain faster. A block doesn't
need to contain transactions to do these things.

> - 7168 byte bloom filter

FWIW, that's significantly larger than the amount of data typically needed
to send the whole block using the fast block relay protocol.

Your estimates are assuming the empty blocks come purely from transmission
and verification, but because most verification is cached and transmission
compressed -- they don't. There are numerous latency sources through the
whole stack, some constant, some size-proportional... mining without
validation achieves its gains not from skipping validation (at least not
most of the time), but mostly from short cutting a deep stack with many
latency sources, including ones that have nothing to do with bitcoin core
or the Bitcoin protocol. High hardware latency also amplifies short
periods of empty block mining into longer periods.

Perhaps most importantly, VFM mining avoids needing to identify and
characterize these other delay sources; by short cutting right at the
end, no one needs to even figure out that their pool server is performing
a DNS request before every time it contacts their bitcoind RPC or whatnot.

> "Implement "headers-first" mining. As soon as a valid 80-byte block

This BIP draft resulted in me relieving some pretty vicious attacks from
that community... funny.

> Any thoughts? Pointers to the bitcointalk thread where this was proposed
> two years ago? :)

Relevant to your interests: https://github.com/bitcoin/bitcoin/pull/1586
Lots of discussion on IRC.

----------
From: Anthony Towns
Date: Wed, Mar 2, 2016 at 9:55 PM
To: Gregory Maxwell

On Mon, Feb 29, 2016 at 03:20:01AM +0000, Gregory Maxwell wrote:
> On Mon, Feb 29, 2016 at 2:13 AM, Anthony Towns wrote:
> > Would it make sense to require some demonstration that you've validated
> > prior blocks? eg, you could demonstrate you've done part of the work
> That information is easily shared/delegated...

Yeah, I thought about that. It's a tradeoff -- you definitely want the
validation to be easily "shared" in the sense that you want one validation
run to suffice for billions of mining attempts; and you probably want it
to be easy to compute when you receive a block, so you don't have to
revalidate the previous one to validate the new one... But you don't want
it to be so easily shared that one person on the planet calculates it and
everyone else just leeches from them.

> so it just creates
> another centralized information source, and another source of
> unfairness producing latency in the mining process. Without actually
> preventing parties from mining. Doubly so in the context of how
> validationless mining is actually done; the miners pull from other
> miners' stratum servers; so they'll just see the commitments there.

I think you could make it hostile to accidental sharing by having it be:

    sha256( sha256( current block's first +1 coinbase outputs ;
                    previous block's nonce )
            sha256( previous block's sighash values ) )

If you skipped the internal sha256's (or just moved the nonce into the
final sha256), you'd be half-way forced to revalidate the previous block
every time you found a new block, which might be worthwhile.
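[Editor's note: a minimal Python sketch of the nested construction above.
The function and argument names are invented for illustration, and the
concatenation/serialization details are guesses -- the thread doesn't
specify them.]

```python
import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def validation_commitment(coinbase_outputs, prev_nonce, prev_sighashes):
    # The inner hash binds the commitment to this miner's own coinbase and
    # to the previous block's nonce, so it can't be precomputed before the
    # previous block is known, and copying another miner's value would
    # commit to *their* coinbase outputs instead of yours.
    inner = sha256(b"".join(coinbase_outputs) + prev_nonce)
    # This half is only cheap if you actually verified every signature in
    # the previous block: the sighash digests fall out of that work.
    proof_of_validation = sha256(b"".join(prev_sighashes))
    return sha256(inner + proof_of_validation)
```

The separate inner hashes are what make the result shareable-but-personal:
the expensive part (the sighash digest) can be computed once per block, but
the outer hash still differs per miner.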
> > if you set the "validated" bit without including the demonstration of
> > validation, your block is invalid.
> Pretty good incentive to not adopt the scheme, perhaps?

Well, my theory was once you have validated the block, then the
demonstration is trivially easy to provide. I was thinking that you could
add a positive incentive by making validated blocks count for something
like 1.6x the chainwork for choosing which chain to build on; so if you
have a chain with 3 unvalidated blocks in a row, then a chain with 2
validated blocks in a row instead would be preferred for building your
next block.

> Moreover, this creates another way for a block to be invalid which has
> no compact fraud proof. :(

Hmmm. That's true. Is it true by definition though? If you're proving
you've validated 100% of a block, then is it even conceptually possible to
check that proof with less work than validating 100% of a block? Sounds
kind of SNARK-ish.

Oh, don't SNARKs (theoretically) give you a compact fraud proof, provided
the block size and sigops are bounded? The "secret" input is the block
data, public input is the block hash and the supposed validation proof
hash, program returns true if the block hash matches the block data, and
the calculated validation hash doesn't match the supposed validation hash.
Shudder to think how long generating the proof would take though, or how
hard it'd be to generate the circuit in the first place...

> > It occurred to me while emailing with Matt Corallo, that it's probably
> > possible to make it easy to generate actually useful blocks while doing
> > validationless mining, rather than only creating empty blocks.
> I agree but:
> I'm basically tired of repeating to people that there is no need for a
> validationless block to be empty. So yes, I agree with you on that
> fact; it's possible for miners to do this already, with no protocol
> changes (yes, it requires trusting each other but inherently
> validationless mining already requires that).
If you're only mining an empty block, the only way someone else can cause
you to waste your time is by wasting their own time doing PoW on an
invalid block. If you're mining a block with transactions in it, and they
can mine a valid block, but trick you into mining something that double
spends, then they can make you waste your time without wasting their own,
which seems like a much worse attack to me.

The advantage of the consensus-enforced bloom filter is you don't have to
trust anything more than that economic incentive. However if you just sent
an unverifiable bloom filter, it'd be trivial to trick you into mining an
invalid block. (If you already have the 1MB of block data, then extracting
the prevouts for use as a blacklist would probably be plenty fast though.)

(Of course, maybe 90% of current hashpower does trust each other anyway,
in which case requiring trust isn't a burden, but that's not very
decentralised...)

(Paragraphs deleted. My maths is probably wrong, but I think it is
actually economically rational to mine invalid blocks as chaff to distract
validationless miners? The numbers I get are something like "if 40% of the
network is doing validationless mining for 20 seconds out of every 10
minutes, then it's profitable to devote about 2% of your hashpower to
mining invalid blocks". Probably some pretty dodgy assumptions though, so
I'm not including any algebra. But having actual invalid blocks with real
proof of work appear in the wild seems like it'd be a good way to
encourage miners to do validation...)

> Miners only don't bother
> right now because the funds left behind are insubstantial.

Hey, fees are almost 1% of the block payout these days -- that's within an
order of magnitude of a rounding error!

> It's absolutely untrue that an empty block is not useful.

Yeah, I deleted "useless" for that reason then put it back in anyway...
> > - 7168 byte bloom filter
> FWIW, that's significantly larger than the amount of data typically
> needed to send the whole block using the fast block relay protocol.

Really? Hmm, if you have 2-byte indexes into the most likely to be mined
60k transactions, by 2000 transactions per block is about 4000 bytes. So I
guess that makes sense. And weak blocks would make that generalisable and
only add maybe a 32B index to include on the wire, presumably. It'd only
take a dozen missed transactions to be longer though.

> Your estimates are assuming the empty blocks come purely from
> transmission and verification, but because most verification is cached
> and transmission compressed -- they don't. There are numerous latency
> sources through the whole stack, some constant, some
> size-proportional... mining without validation achieves its gains
> not from skipping validation (at least not most of the time), but
> mostly from short cutting a deep stack with many latency sources,
> including ones that have nothing to do with bitcoin core or the
> Bitcoin protocol.

Hmm, so my assumption is the "bitcoin core" side of the stack looks
something like:

    block header received by p2p or relay network
      |
      V
    block data received by p2p or relay network
      |
      V
    validation, UTXO set updates
      |
      V
    getblocktemplate (possible tx ordering recalculation)
      |
      V
    block header to do PoW on!
      |
      V
    vary and push to miners over the network
      |
      V
    push to ASICs

and the validationless "shortcut" just looks like:

    block header received by p2p or relay network
      |
      V
    hack hack
      |
      V
    new block header to do PoW on!
      |
      V
    vary and push to miners over the network
      |
      V
    push to ASICs

and so making the bitcoin core parts able to provide an unvalidated header
to push to miners/ASICs against "instantly" would be a win as far as
getting bitcoin proper back into the loop all the time... That would mean
removing validation from the critical path, and possibly more optimisation
of getblocktemplate to make it effectively instant too. But those seem
possible? Having it be:

    header received by bitcoin core
      |
      V
    new block header to do (unverified) PoW on!
      |
      V
    ...

and

    header received by bitcoin core
      |
      V
    block data received by bitcoin core
      |
      V
    block data validated
      |
      V
    new block header to do (verified) PoW on!
      |
      V
    ...

with mining tools being able to just reliably and efficiently leave
bitcoin core in the loop seems like it ought to be a win to me...

> Perhaps most importantly, VFM mining avoids needing to identify and
> characterize these other delay sources; by short cutting right at the
> end, no one needs to even figure out that their pool server is
> performing a DNS request before every time it contacts their bitcoind
> RPC or whatnot.

At least with longpoll, doing a DNS query before connection shouldn't
matter?

> > "Implement "headers-first" mining. As soon as a valid 80-byte block
> This BIP draft resulted in me relieving some pretty vicious attacks
> from that community... funny.

I'm guessing you meant "receiving", which makes that a kinda weird
freudian slip? :) But yeah, technical consistency isn't something I've
seen much of from that area...

> > Any thoughts? Pointers to the bitcointalk thread where this was
> > proposed two years ago? :)
> Relevant to your interests: https://github.com/bitcoin/bitcoin/pull/1586

Tsk, 2 != 4... Hmm, I'm not sure where this leaves my opinion on either of
those ideas.

Cheers,
aj

----------
From: Anthony Towns
Date: Sun, Mar 13, 2016 at 3:58 AM
To: Gregory Maxwell

On Thu, Mar 03, 2016 at 07:55:06AM +1000, Anthony Towns wrote:
> > > - 7168 byte bloom filter
> > FWIW, that's significantly larger than the amount of data typically
> > needed to send the whole block using the fast block relay protocol.
> Really? Hmm, if you have 2-byte indexes into the most likely to be mined
> 60k transactions, by 2000 transactions per block is about 4000 bytes. So
> I guess that makes sense.
> And weak blocks would make that generalisable
> and only add maybe a 32B index to include on the wire, presumably.
> It'd only take a dozen missed transactions to be longer though.

So I think there's two levels of withholding adversarial miners could do:

- block withholding, so they have more time to build on top of their own
  block, maybe increasing their effective hashrate if they have above
  average connectivity

- transaction withholding, so an entire block can be invalidated after
  the fact, hitting SPV nodes. if there are SPV miners, this can
  invalidate their work (potentially profitably, if you've accidentally
  orphaned yourself)

You could solve transaction withholding for miners just by saying "a PoW
isn't valid unless the merkle tree is valid", that way you can't
retroactively invalidate a block, but then you need fast relay before
starting to mine, not just the header and some hint as to what
transactions might be included, and therefore the bloom filter idea is
pointless...

Having actually tried the relay network now, it seems like:

a) it gets less coding gain than it theoretically could; the day or so's
   worth of blocks from Lightsword only seemed to be ~8x less data,
   rather than ~125x-250x, and what I'm seeing seems similar. So still
   room for improvement?

b) using "weak blocks" as a way of paying for adding "non-standard"
   transactions (large, low fee, actually non-standard, etc) to the
   mempool seems workable to me; so long as the only reason you're doing
   weak blocks is so miners can ensure the transactions they're mining
   are in mempools, and thus that their blocks will relay quickly, the
   incentives seem properly aligned. (I think you'd want to distinguish
   txns only relayed because they have a weak block, just to be nice to
   SPV clients -- weak block txns might only be mined by one miner, while
   standard, fee paying transactions are being mined by all/most miners)

c) it seems like it would be possible to adapt the relay protocol into a
   p2p environment to me? I'm thinking that you provide a bidirectional
   mapping for (a subset of) your mempool for each connection you have,
   so that you can quickly go to/from a 2-byte index to a transaction.
   If you make it so that whoever was listening gets to decide what
   transactions are okay, then you'd just need 9 of these maps -- 1 for
   each of your outgoing connections (ie, 8 total), plus another 1 that
   covers all your incoming connections, and each map should only really
   need to use up to about a meg of memory, which seems pretty feasible.
   Maybe it means up to 8x5MB of your mempool is controlled by other
   people's policies rather than your own, but that doesn't seem too bad
   either.

d) I'm a bit confused how it compares to IBLT; it seems like IBLT has
   really strong ordering requirements to work correctly, but if you had
   that you could compress the fast relay protocol really well, since you
   could apply the same ordering to your shared mempool, and then just
   send "next tx, next tx, skip 1 tx, next tx, next tx, skip 3 tx, next
   tx, here's one you missed, ...", which with compression would probably
   get you to just a few /bits/ per (previously seen) transaction... [0] [1]

e) for p2p relay, maybe it would make sense to have the protocol only
   allow sending blocks where all the transactions are "previously seen".
   that way if you get a block where some txes haven't been seen before,
   you stall that block, and start sending transactions through. if
   another block comes in in the meantime, that doesn't have any new
   transactions, you send that block through straight away. that
   encourages sending weak blocks through first, to ensure your
   transactions are already in mempools and no one else can sneak in
   first.

Hmm... So that all seems kind of plausible to me; in how many ways am I
mistaken?
:)

Cheers,
aj

[0] A hard-fork change to have the block merkle tree be ordered by txid,
    and have the transactions topologically sorted before being validated
    would be kind-of interesting here -- apart from making sorting
    obvious, it'd make it easy to prove that a block doesn't contain a
    transaction. Bit altcoin-y though...

[1] Maybe having the shared mempool indexes be sorted rather than FIFO
    would make the data structures hard; I don't think so, but not sure.

----------
From: Gregory Maxwell
Date: Sun, Mar 13, 2016 at 5:06 AM
To: Anthony Towns

On Sun, Mar 13, 2016 at 3:58 AM, Anthony Towns wrote:
> On Thu, Mar 03, 2016 at 07:55:06AM +1000, Anthony Towns wrote:
>> > > - 7168 byte bloom filter
>> > FWIW, that's significantly larger than the amount of data typically
>> > needed to send the whole block using the fast block relay protocol.
>> Really? Hmm, if you have 2-byte indexes into the most likely to be mined
>> 60k transactions, by 2000 transactions per block is about 4000 bytes. So
>> I guess that makes sense. And weak blocks would make that generalisable
>> and only add maybe a 32B index to include on the wire, presumably.
>> It'd only take a dozen missed transactions to be longer though.
>
> So I think there's two levels of withholding adversarial miners could
> do:
>
> - block withholding, so they have more time to build on top of their
>   own block, maybe increasing their effective hashrate if they have
>   above average connectivity

Also called "selfish mining".

> - transaction withholding, so an entire block can be invalidated
>   after the fact, hitting SPV nodes. if there are SPV miners, this can
>   invalidate their work (potentially profitably, if you've accidentally
>   orphaned yourself)
> You could solve transaction withholding for miners just by saying
> "a PoW isn't valid unless the merkle tree is valid", that way you
> can't retroactively invalidate a block, but then you need fast relay
> before starting to mine, not just the header and some hint as to what
> transactions might be included, and therefore the bloom filter idea
> is pointless...

Right, this is how Bitcoin Core works (won't extend a chain it hasn't
validated) -- but some miners have shortcutted it to reduce latency. (And
not just bypassing validation, but the whole process, e.g. transaction
selection; which historically has taken more time than propagation.)

> Having actually tried the relay network now, it seems like:
>
> a) it gets less coding gain than it theoretically could; the day or
>    so's worth of blocks from Lightsword only seemed to be ~8x less data,
>    rather than ~125x-250x, and what I'm seeing seems similar. So still
>    room for improvement?

It's pretty variable. It depends a lot on consistency between the
transactions the server side selects and the client. When spam attacks go
on, or miners change their policy, compression falls off until the far
end twiddles. Go look at the distribution of the results.

> c) it seems like it would be possible to adapt the relay protocol into
>    a p2p environment to me? I'm thinking that you provide a bidirectional
>    mapping for (a subset of) your mempool for each connection you
>    have, so that you can quickly go to/from a 2-byte index to a
>    transaction. If you make it so that whoever was listening gets to
>    decide what transactions are okay, then you'd just need 9 of these
>    maps -- 1 for each of your outgoing connections (ie, 8 total), plus
>    another 1 that covers all your incoming connections, and each map
>    should only really need to use up to about a meg of memory, which
>    seems pretty feasible.
> Maybe it means up to 8x5MB of your mempool
>    is controlled by other people's policies rather than your own,
>    but that doesn't seem too bad either.

That is a bit kludgy, but yes -- it would work. But the key thing about
latency minimization is that you _must_ send a block with no request;
because otherwise the RTT for just the request alone will totally dominate
the transfer in most cases. And having N peers send you the whole block
redundantly ends up hurting your performance (esp because packet losses
mean more round trips) even if the compression is very high.

All these problems can be avoided; at least in theory. Optimal latency
mitigation would be achieved by something like block network coding
techniques:
https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding

With these techniques peers could blindly send you data without you
requesting it, while every byte they send would usefully contribute to
your reconstruction. With extra effort and opportunistic forwarding the
entire network could, in theory, receive a block in the time it took the
original host to send only one block, while making use of a significant
fraction of the network's whole bisection bandwidth.

> d) I'm a bit confused how it compares to IBLT; it seems like IBLT has
>    really strong ordering requirements to work correctly, but if you
>    had that you could compress the fast relay protocol really well,
>    since you could apply the same ordering to your shared mempool, and
>    then just send "next tx, next tx, skip 1 tx, next tx, next tx, skip
>    3 tx, next tx, here's one you missed, ...", which with compression
>    would probably get you to just a few /bits/ per (previously seen)
>    transaction... [0] [1]

Latency of block relay easily ends up CPU bound, even when not doing
anything too smart (this is why Matt's relay protocol stuff has AVX sha2
code in it). Prior IBLT implementation attempts have performance so low
that their decode time ends up dwarfing transmission time, and plain
uncoded blocks are faster for common host/bandwidth configurations.

The ordering requirements stuff is not that relevant in my view; you
likely believe this because Gavin rat-holed himself on it trying to spec
out ordering requirements for miners... The reality of it is that a
uniform permutation of, say, 4000 transactions can be stored in
log2(4000!)/8 bytes, or about 5.2 kbytes (and this is easily achieved
just by using range coding to optimally pack integers in the range
[0..n_unpicked_txn) to pick transactions out of a lexicographically
sorted list)... and this is without any prediction at all -- randomly
ordered txn in the block would work just as well. [E.g. using the uint
coder from the daala video codec project can code these values with about
1% overhead, and runs at about 12MB/sec doing so on my slow laptop]

Recently some folks have been working privately on a block network coding
implementation... earlier attempts (even before IBLT became trendy) were
thwarted by the same thing that thwarts IBLT: the decoding was so slow it
dominated the latency. We've found some faster coding schemes though...
so it looks like it might be practical now. I could send you more info if
you read the block network coding page and are interested in helping.

Both IBLT and BNC would be more useful in the weakblocks model because
there the decode speed isn't latency critical -- so if it needs 100ms of
CPU time to decode an efficiently encoded block, that is no big deal.

> e) for p2p relay, maybe it would make sense to have the protocol only
>    allow sending blocks where all the transactions are "previously
>    seen". that way if you get a block where some txes haven't been
>    seen before, you stall that block, and start sending transactions
>    through.
> if another block comes in in the meantime, that doesn't
>    have any new transactions, you send that block through straight
>    away. that encourages sending weak blocks through first, to ensure
>    your transactions are already in mempools and no one else can
>    sneak in first.

Yes, it's perfectly reasonable to do that for bandwidth minimization--
though it doesn't minimize latency. "Seen" is complex: you have no
guarantee a peer will accept any transaction you've sent it, or even
that it will retain any it sent you. So multiple round trips are
required to resolve missing transactions.

We haven't bothered implementing this historically because the
bandwidth reduction is small overall, and it's not the right strategy
for reducing latency-- the vast majority of bandwidth is eaten by
relay. Right now maybe 15% is used by blocks... so at most you'd get a
15% improvement here. I did some fairly informal measurements and
posted about it: https://bitcointalk.org/index.php?topic=1377345.0

I also point out there that the existing blocksonly mode achieves
bandwidth optimal transport already (ignoring things like transaction
format compression)... just so long as you don't care about learning
about unconfirmed transactions. :)

> [0] A hard-fork change to have the block merkle tree be ordered by txid,
>     and have the transactions topologically sorted before being validated
>     would be kind-of interesting here -- apart from making sorting
>     obvious, it'd make it easy to prove that a block doesn't contain a
>     transaction. Bit altcoin-y though...

If you sort by data (or ID) without requiring the verifier to
topologically sort then an efficient permutation coder would only
spend bits on places where dependencies push things out of the
expected order... which is fairly rare. Seems like a reasonable cost
for avoiding the hardfork, no? The receiver topo sort requirement
would also require more memory in a block verifier; and would be more
complex to fraud proof, I think.
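Greg's permutation-size arithmetic is easy to sanity-check. The sketch below is my own toy factorial-base coder (not the range coder or daala uint coder he mentions, and `encode_order`/`decode_order` are hypothetical names): it confirms the log2(4000!)/8 figure and illustrates the "pack integers in [0..n_unpicked)" idea of picking each transaction out of a lexicographically sorted candidate list.

```python
import math

# log2(4000!) bits, via lgamma to avoid computing a huge factorial:
# about 5.26 kB, i.e. the "about 5.2kbytes" figure in the mail.
perm_bytes = math.lgamma(4000 + 1) / math.log(2) / 8

# Toy version of the coding idea: each pick is an integer in
# [0..n_unpicked) against a sorted candidate list. A range coder does
# this incrementally and near-optimally; here the picks are simply
# folded into one big int (factorial-base coding), which hits the same
# information bound.
def encode_order(txids):
    unpicked = sorted(txids)
    code = 0
    for t in txids:
        i = unpicked.index(t)            # position among remaining candidates
        code = code * len(unpicked) + i
        unpicked.pop(i)
    return code

def decode_order(code, txids):
    digits = []                          # recover digits, last pick first
    for radix in range(1, len(txids) + 1):
        digits.append(code % radix)
        code //= radix
    digits.reverse()
    unpicked = sorted(txids)
    return [unpicked.pop(i) for i in digits]

block_order = ["c4", "07", "9a", "31"]
assert decode_order(encode_order(block_order), block_order) == block_order
```

A real implementation would emit the indexes through a range coder so the stream stays near the entropy bound without manipulating one giant integer.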
Engineering wise it's not quite so simple. It's helpful for miners to
have blocks sorted by feerate so that later stages of the mining
process can drop the least profitable transactions simply by
truncating the block.

> [1] Maybe having the shared mempool indexes be sorted rather than FIFO
>     would make the data structures hard; I don't think so, but not sure.

I tried to get Matt to do that for his stuff previously, pointing out
that sorted indexes would be easier to efficiently code. His
counterargument was that for 2000 txn, the two-byte indexes take 4kB,
which is pretty insignificant... and that his time would be better
spent trying to get the hit-rate up. I found that hard to argue
with. :)

----------
From: Anthony Towns
Date: Mon, Mar 14, 2016 at 3:08 AM
To: Gregory Maxwell

On Sun, Mar 13, 2016 at 05:06:25AM +0000, Gregory Maxwell wrote:
> > - block withholding, so they have more time to build on top of their
> >   own block, maybe increasing their effective hashrate if they have
> >   above average connectivity
> Also called "selfish mining".

Yup.

> > c) it seems like it would be possible to adapt the relay protocol into
> >    a p2p environment to me? [...]
> That is a bit kludgy, but yes-- it would work.
> But the key thing about latency minimization is that you _must_ send a
> block with no request; because otherwise the RTT for just the request
> alone will totally dominate the transfer in most cases. And having N
> peers send you the whole block redundantly ends up hurting your
> performance (esp because packet losses mean more round trips) even if
> the compression is very high.

If the block can be encoded fully, then it's up to maybe 10kB per
block max (at 1MB blocksize); I don't think multiple transmissions
matter much in that case? Hmm, maybe it does...

> All these problems can be avoided; at least in theory.
> Optimal latency
> mitigation would be achieved by something like block network coding
> techniques:
> https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding

Ugh, patents. Interesting that the patents on turbo codes have
expired, last time I looked they hadn't.

> With these techniques peers could blindly send you data without you
> requesting it, while every byte they send would usefully contribute to
> your reconstruction.

Yeah, that makes sense I think. Pretty complicated though. The
"someone sent corrupt data" case seems a little bit problematic to
deal with too, especially in the "optimistically forward stuff before
you can validate it" phase. At least if you're using error correcting
codes anyway, that's probably a self-solving problem.

What's with the switch from 32 bit faux ids in the original section to
63 bits in the reimagination? I guess you use most of that for the
additional encoded length though...

Keying with the previous block's hash seems kind-of painful, doesn't
it? Once you receive the ids, you want to look up the actual
transactions from your mempool, but since you can't decrypt anything
useful with only the first 50/60 bits of cyphertext, the only way to
do that is to have already cycled through all the transactions in your
mempool and pre-calculated what their network coded id for that block
is, and you have to do that every time you receive a block (including
orphans, I guess). It'd make reorgs more expensive too, because you'd
have to reindex all the mempool then as well?

Maybe if you're only doing that predictively it's not so bad? The
5MB-20MB of txes with highest fees get coded up, and you just download
any other transactions in full? If you're downloading large coinbase
txes regularly anyway, that's probably no big deal.

> Latency of block relay easily ends up CPU bound; even when not doing
> anything too smart (this is why Matt's relay protocol stuff has AVX
> sha2 code in it).
Yeah, that seemed a little odd to me; there shouldn't be that much
hashing to validate a block (1MB of transactions, then maybe 128kB to
get to sha256d, then another 2*128kB for the rest of the merkle
tree?). Matt's code seems like it's doing a linear search through the
tx index to find each tx though, which probably doesn't help.

> Prior IBLT implementation attempts have performance
> so low that their decode time ends up dwarfing transmission time, and
> plain uncoded blocks are faster for common host/bandwidth
> configurations.

Heh.

> The ordering requirements stuff is not that relevant in my view; you
> likely believe this because Gavin rat-holed himself on it trying to
> spec out ordering requirements for miners... The reality of it is
> that a uniform permutation of, say, 4000 transactions can be stored in
> log2(4000!)/8 bytes, or about 5.2kbytes

Right, but 5.2kB is a lot of overhead; at least compared to the cases
where Matt's stuff already works well :)

> Recently some folks have been working privately on a block network
> coding implementation... earlier attempts (even before IBLT became
> trendy) were thwarted by the same thing that thwarts IBLT: the
> decoding was so slow it dominated the latency. We've found some faster
> coding schemes though... so it looks like it might be practical now. I
> could send you more info if you read the block network coding page and
> are interested in helping.

Sure. (Though, fair warning, I've already failed a few times at doing
anything useful with erasure coding...)

> > e) for p2p relay, maybe it would make sense to have the protocol only
> >    allow sending blocks where all the transactions are "previously
> >    seen". that way if you get a block where some txes haven't been
> >    seen before, you stall that block, and start sending transactions
> >    through. if another block comes in in the meantime, that doesn't
> >    have any new transactions, you send that block through straight away.
> > that encourages sending weak blocks through first, to ensure your
> >    transactions are already in mempools and no one else can sneak
> >    in first.
> Yes, it's perfectly reasonable to do that for bandwidth minimization--
> though it doesn't minimize latency. "Seen" is complex, you have no
> guarantee a peer will accept any transaction you've sent it, or even
> that it will retain any it sent you. So multiple round trips are
> required to resolve missing transactions.

The "p2p relay" in my head has "seen" meaning "the 5MB of transactions
the listening peer thinks is most likely to be mined"; odds on, both
peers have actually seen something like 145MB of additional
transactions too. You don't do round trips; you just start sending the
"unseen" transactions automatically (by id or in full?), then you send
the compressed block. The only round trip is if you sent the id, but
they actually needed the full tx.

In my head, you get good latency if you do weak blocks beforehand, and
somewhat poorer latency if you don't. Even in my head, I'm not sure
that's actually feasible, though: I'm not sure weak blocks for
coinbase transactions really work, and comparatively high latency on
5% of blocks that didn't get any weak blocks beforehand isn't very
attractive...

> We haven't bothered implementing this historically because the
> bandwidth reduction is small overall, and it's not the right strategy
> for reducing latency-- the vast majority of bandwidth is eaten by
> relay. Right now maybe 15% is used by blocks... so at most you'd get a
> 15% improvement here.

Yeah, I'm assuming a non-trivial increase in bandwidth usage compared
to current relay. Compared to relaying spam transactions (that don't
get mined prior to expiry), I'm not sure it's significant though.
> > [0] A hard-fork change to have the block merkle tree be ordered by txid,
> >     and have the transactions topologically sorted before being validated
> >     would be kind-of interesting here -- apart from making sorting
> >     obvious, it'd make it easy to prove that a block doesn't contain a
> >     transaction. Bit altcoin-y though...
> If you sort by data (or ID) without requiring the verifier to
> topologically sort then an efficient permutation coder would only
> spend bits on places where dependencies push things out of the
> expected order... which is fairly rare.

Really? I was seeing a lot of transaction chains in the couple of
blocks I looked at. Also, you wouldn't get short proofs that a
transaction isn't present in a block that way either, afaics.

> Seems like a reasonable cost for avoiding the hardfork, no? The
> receiver topo sort requirement would also require more memory in a
> block verifier; and would be more complex to fraud proof, I think.

Hmm, I think it'd be easy to fraud proof -- just show adjacent merkle
paths where the results are in the wrong order. Maybe the same's true
with the id-order-but-toposorted too -- just show adjacent merkle
paths where the results are in the wrong order, and the later doesn't
depend on the former. I'm not sure that gives a unique sort though
(but maybe that doesn't actually matter).

> Engineering wise it's not quite so simple. It's helpful for miners to
> have blocks sorted by feerate so that later stages of the mining
> process can drop the least profitable transactions simply by
> truncating the block.

Yeah; not having ordering requirements seems far more practical.

> > [1] Maybe having the shared mempool indexes be sorted rather than FIFO
> >     would make the data structures hard; I don't think so, but not sure.
> I tried to get Matt to do that for his stuff previously; pointing out
> the sorted indexes would be easier to efficiently code.
> His
> counterargument was that for 2000 txn, the two-byte indexes take 4kB,
> which is pretty insignificant... and that his time would be better
> spent trying to get the hit-rate up. I found that hard to argue with.
> :)

Yeah. Having the bitcoin mempool and fee info (and heck, priority
info) more readily available when seeing new transactions and choosing
what to include seems like it'd be helpful here. Seems relatively
painful to do that outside of bitcoin though.

Cheers,
aj

----------
From: Gregory Maxwell
Date: Mon, May 15, 2017 at 8:03 PM
To: Anthony Towns

I ran into someone proposing the same thing as you. Can I share this
discussion with them? (with the public?)

----------
From: Anthony Towns
Date: Mon, May 15, 2017 at 11:00 PM
To: Gregory Maxwell

Yes, go ahead on both counts.

--
Sent from my phone.

Forwarded conversation
Subject: Blockchain verification flag (BIP draft)
----------------------

From: Anthony Towns <aj@erisian.com.au>
Date: Mon, Feb 29, 2016 at 2:13 AM
To: Gregory Maxwell <greg@xiph.org>

On Fri, Dec 04, 2015 at 08:26:22AM +0000, Gregory Maxwell via bitcoin-dev wrote:
> A significant fraction of hashrate currently mines blocks without
> verifying them for a span of time after a new block shows up on the
> network for economically rational reasons.

Two thoughts related to this. Are they obvious or daft?

a)

Would it make sense to require some demonstration that you've
validated prior blocks? eg, you could demonstrate you've done part of
the work to at least verify signatures from the previous block by
including the sha256 of the concatenation of all the sighash values in
the coinbase transaction -- if you'd already done the sig checking,
calculating that as you went would be pretty cheap, I think. Then make
the rule be that if you set the "validated" bit without including the
demonstration of validation, your block is invalid.

I guess this is more or less what Peter Todd proposed in:

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012105.html

b)

It occurred to me while emailing with Matt Corallo, that it's probably
possible to make it easy to generate actually useful blocks while
doing validationless mining, rather than only creating empty blocks.

When creating a block, you:

   - calculate a fixed size (7168 bytes?) bloom filter of the
     prevouts that the block is spending
   - include the sha256 of the final bloom filter as the last output
     in the coinbase
   - enforce the inclusion of that sha256 by soft-fork
   - as part of fast relaying, transmit:
       - 80 byte block header
       - 7168 byte bloom filter
       - < 416 (?) byte merkle path to the coinbase
       - 64 byte sha256 midstate for coinbase up to start of the
         final transaction
       - < 128 byte tail of coinbase including bloom commitment
     (total of 7856 bytes, so less than 8kB)
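A minimal sketch of such a fixed-size prevout filter may help make the sizes concrete. The double-sha256 bit derivation and the choice of 9 hash functions are assumptions of mine (roughly optimal for the ~4500 prevouts per block mentioned below), not part of the proposal:

```python
import hashlib

FILTER_BYTES = 7168            # fixed size from the scheme above
FILTER_BITS = FILTER_BYTES * 8
NUM_HASHES = 9                 # assumed; k ~= (m/n) * ln 2 for n ~= 4500

def bloom_bits(prevout: bytes):
    # Derive NUM_HASHES bit positions from sha256 of the serialized prevout.
    seed = hashlib.sha256(prevout).digest()
    for i in range(NUM_HASHES):
        chunk = hashlib.sha256(seed + bytes([i])).digest()
        yield int.from_bytes(chunk[:8], "big") % FILTER_BITS

def make_filter(prevouts):
    f = bytearray(FILTER_BYTES)
    for p in prevouts:
        for b in bloom_bits(p):
            f[b // 8] |= 1 << (b % 8)
    return bytes(f)

def maybe_spent(f, prevout):
    # False: definitely not spent by the block. True: spent, or a false positive.
    return all(f[b // 8] & (1 << (b % 8)) for b in bloom_bits(prevout))

f = make_filter([b"txid0:0", b"txid1:3"])
commitment = hashlib.sha256(f).hexdigest()   # what the coinbase would commit to
assert maybe_spent(f, b"txid0:0")
```

The commitment is just the sha256 of the final filter bytes, which is what the merkle path and coinbase tail in the relay message would tie to the proof-of-work.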

I think that would be enough to verify that the proof-of-work is
committing to the bloom filter, and the bloom filter will then let you
throw out any transactions that could have been included in a block
built on block n-1, but can't be included in block n+1 -- whether
they're included in the new block, or would be double spends. So given
that information you can safely build a new block that's actually full
of transactions on top of the new block, even prior to downloading it
in full, let alone validating it.

I've run that algorithm over the last couple of weeks' worth of
transactions (to see how many of the next block's transactions would
have been thrown away using that approach) and it appeared to work
fine -- it throws away maybe a dozen transactions per block compared
to accurate validation, but that's only about a couple of kB out of a
1MB block, so something like 0.2%.  (I'm seeing ~4500 prevouts per
block roughly, so that's the error rate you'd expect; doubling for
2MB's worth of txes with segwit predicts 3.5%, doubling again would
presumably result in 14% of transactions being falsely identified as
double spends prior to the block actually validating)
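Those figures are roughly consistent with the standard bloom false-positive estimate p ~= (1 - e^(-kn/m))^k. A quick check, where k = 9 hash functions is an assumption of mine (near-optimal for these numbers):

```python
import math

m = 7168 * 8          # filter size in bits
n = 4500              # prevouts per block, as observed above
k = 9                 # assumed hash count; optimal is (m/n) * ln 2 ~= 8.8

# Probability a prevout not in the filter still has all k bits set:
p = (1 - math.exp(-k * n / m)) ** k
print(f"false positive rate: {p:.4f}")   # ~0.002, i.e. the ~0.2% quoted
```

With the filter size held fixed at 7168 bytes, doubling n degrades the rate sharply, which is the scaling concern raised in the parenthetical.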

I haven't checked the math in detail, but I think that could
reasonably give an immediate 20% increase in effective blocksize,
given the number of empty blocks that get mined... (There were only
~1571MB of transactions in the last 2016 blocks, so bumping the
average from 780kB per block to 940kB would be a 20% increase; which
would bring the 1.7x segwit increase up to 2x too...)

Also, as far as I can see, you could probably even just have bitcoin
core transmit that 8kB of data around as part of propagating headers
first. Once you've got the new header and bloom filter, the only extra
bit should be passing both those into getblocktemplate to update the
previousblockhash and transaction selection. Both those together and
it seems like you could be mining on top of the latest block seconds
after it was found, just by naively running a bitcoin node?

I saw the "Bitcoin Classic" roadmap includes:

  "Implement "headers-first" mining. As soon as a valid 80-byte block
   header that extends the most-work chain is received, relay the header
   (via a new p2p network message) and allow mining an empty block on top
   of it, for up to 20 seconds."

which seems like the same idea done worse...

Any thoughts? Pointers to the bitcointalk thread where this was
proposed two years ago? :)

Cheers,
aj


----------
From: Gregory Maxwell <gmaxwell@gmail.com>
Date: Mon, Feb 29, 2016 at 3:20 AM
To: Anthony Towns <aj@erisian.com.au>

On Mon, Feb 29, 2016 at 2:13 AM, Anthony Towns <aj@erisian.com.au> wrote:
> Would it make sense to require some demonstration that you've validated
> prior blocks? eg, you could demonstrate you've done part of the work

That information is easily shared/delegated... so it just creates
another centralized information source, and another source of
unfairness producing latency in the mining process. Without actually
preventing parties from mining. Doubly so in the context of how
validationless mining is actually done; the miners pull from other
miner's stratum servers; so they'll just see the commitments there.

So I don't see there being too much value there.

> if you set the "validated" bit without including the demonstration of
> validation, your block is invalid.

Pretty good incentive to not adopt the scheme, perhaps?

Moreover, this creates another way for a block to be invalid which has
no compact fraud proof. :(

> It occurred to me while emailing with Matt Corallo, that it's probably
> possible to make it easy to generate actually useful blocks while doing
> validationless mining, rather than only creating empty blocks.

I agree but:

I'm basically tired of repeating to people that there is no need for a
validationless block to be empty. So yes, I agree with you on that
fact; it's possible for miners to do this already, with no protocol
changes (yes, it requires trusting each other but inherently
validationless mining already requires that). Miners only don't bother
right now because the funds left behind are insubstantial.

It's absolutely untrue that an empty block is not useful. Every block,
empty or not, mined against the best tip you know contributes to the
resolution of consensus and collapsing the network onto a single
state. Every block that was mined only after validating a block
amplifies security, by helping leave behind an invalid chain faster. A
block doesn't need to contain transactions to do these things.

>        - 7168 byte bloom filter

FWIW, that's significantly larger than the amount of data typically
needed to send the whole block using the fast block relay protocol.

Your estimates are assuming the empty blocks come purely from
transmission and verification, but because most verification is cached
and transmission compressed-- they don't. There are numerous latency
sources through the whole stack, some constant, some
size-proportional... the mining without validation achieves its gains
not from skipping validation (at least not most of the time); but
mostly from short cutting a deep stack with many latency sources,
including ones that have nothing to do with bitcoin core or the
Bitcoin protocol.

High hardware latency also amplifies short periods of empty block
mining to longer periods.

Perhaps most importantly, VFM mining avoids needing to identify and
characterize these other delay sources; by short cutting right at the
end no one needs to even figure out that their pool server is
performing a DNS request before every time it contacts their bitcoind
RPC or whatnot.

>   "Implement "headers-first" mining. As soon as a valid 80-byte block

This BIP draft resulted in me relieving some pretty vicious attacks
from that community... funny.

> Any thoughts? Pointers to the bitcointalk thread where this was proposed
> two years ago? :)

Relevant to your interests: https://github.com/bitcoin/bitcoin/pull/1586

Lots of discussion on IRC.

----------
From: Anthony Towns <aj@erisian.com.au>
Date: Wed, Mar 2, 2016 at 9:55 PM
To: Gregory Maxwell <gmaxwell@gmail.com>

On Mon, Feb 29, 2016 at 03:20:01AM +0000, Gregory Maxwell wrote:
> On Mon, Feb 29, 2016 at 2:13 AM, Anthony Towns <aj@erisian.com.au> wrote:
> > Would it make sense to require some demonstration that you've validated
> > prior blocks? eg, you could demonstrate you've done part of the work
> That information is easily shared/delegated...

Yeah, I thought about that. It's a tradeoff -- you definitely want the
validation to be easily "shared" in the sense that you want one
validation run to suffice for billions of mining attempts; and you
probably want it to be easy to compute when you receive a block, so
you don't have to revalidate the previous one to validate the new
one... But you don't want it to be so easily shared that one person on
the planet calculates it and everyone else just leeches from them.

> so it just creates
> another centralized information source, and another source of
> unfairness producing latency in the mining process. Without actually
> preventing parties from mining. Doubly so in the context of how
> validationless mining is actually done; the miners pull from other
> miner's stratum servers; so they'll just see the commitments there.

I think you could make it hostile to accidental sharing by having it be:

  <n> ;
  sha256(
      sha256( current block's first <n>+1 coinbase outputs ;
              previous block's nonce )
      sha256( previous block's sighash values )
  )

If you skipped the internal sha256's (or just moved the nonce into the
final sha256), you'd be half-way forced to revalidate the previous
block every time you found a new block, which might be worthwhile.
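A sketch of that construction (function and argument names are mine; the coinbase outputs, nonce, and sighash values are stand-in byte strings, not real serializations):

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def validation_commitment(n, coinbase_outputs, prev_nonce, prev_sighashes):
    # sha256( sha256(first n+1 coinbase outputs ; previous nonce)
    #         sha256(previous block's sighash values) )
    inner1 = sha256(b"".join(coinbase_outputs[: n + 1]) + prev_nonce)
    inner2 = sha256(b"".join(prev_sighashes))
    return sha256(inner1 + inner2)

# Mixing this block's own coinbase outputs and the previous nonce into the
# first inner hash is what makes the commitment hostile to accidental
# sharing: copying someone else's commitment means committing to *their*
# payout outputs, so a leech can't reuse it for their own block.
c = validation_commitment(0, [b"out0", b"out1"], b"\x12\x34\x56\x78",
                          [b"sighash-a", b"sighash-b"])
assert len(c) == 32
```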

> > if you set the "validated" bit without including the demonstration of
> > validation, your block is invalid.
> Pretty good incentive to not adopt the scheme, perhaps?

Well, my theory was once you have validated the block, then the
demonstration is trivially easy to provide.

I was thinking that you could add a positive incentive by making
validated blocks count for something like 1.6x the chainwork for
choosing which chain to build on; so if you have a chain with 3
unvalidated blocks in a row, then a chain with 2 validated blocks in a
row instead would be preferred for building your next block.

> Moreover, this creates another way for a block to be invalid which has
> no compact fraud proof. :(

Hmmm. That's true. Is it true by definition though? If you're proving
you've validated 100% of a block, then is it even conceptually
possible to check that proof with less work than validating 100% of a
block? Sounds kind of SNARK-ish.

Oh, don't SNARKs (theoretically) give you a compact fraud proof,
provided the block size and sigops are bounded? The "secret" input is
the block data, public input is the block hash and the supposed
validation proof hash, program returns true if the block hash matches
the block data, and the calculated validation hash doesn't match the
supposed validation hash. Shudder to think how long generating the
proof would take though, or how hard it'd be to generate the circuit
in the first place...

> > It occurred to me while emailing with Matt Corallo, that it's probably
> > possible to make it easy to generate actually useful blocks while doing
> > validationless mining, rather than only creating empty blocks.
> I agree but:
> I'm basically tired of repeating to people that there is no need for a
> validationless block to be empty. So yes, I agree with you on that
> fact; it's possible for miners to do this already, with no protocol
> changes (yes, it requires trusting each other but inherently
> validationless mining already requires that).

If you're only mining an empty block, the only way someone else can
cause you to waste your time is by wasting their own time doing PoW on
an invalid block. If you're mining a block with transactions in it,
and they can mine a valid block, but trick you into mining something
that double spends, then they can make you waste your time without
wasting their own, which seems like a much worse attack to me.

The advantage of the consensus enforced bloom filter is you don't have
to trust anything more than that economic incentive. However if you
just sent an unverifiable bloom filter, it'd be trivial to trick you
into mining an invalid block.

(If you already have the 1MB of block data, then extracting the
prevouts for use as a blacklist would probably be plenty fast though)

(Of course, maybe 90% of current hashpower does trust each other
anyway, in which case requiring trust isn't a burden, but that's not
very decentralised...)

(Paragraphs deleted. My maths is probably wrong, but I think it is
actually economically rational to mine invalid blocks as chaff to
distract validationless miners? The numbers I get are something like
"if 40% of the network is doing validationless mining for 20 seconds
out of every 10 minutes, then it's profitable to devote about 2% of
your hashpower to mining invalid blocks". Probably some pretty dodgy
assumptions though, so I'm not including any algebra. But having
actual invalid blocks with real proof of work appear in the wild seems
like it'd be a good way to encourage miners to do validation...)

> Miners only don't bother
> right now because the funds left behind are insubstantial.

Hey, fees are almost 1% of the block payout these days -- that's
within an order of magnitude of a rounding error!

> It's absolutely untrue that an empty block is not useful.

Yeah, I deleted "useless" for that reason then put it back in anyway...

> >        - 7168 byte bloom filter
> FWIW, that's significantly larger than the amount of data typically
> needed to send the whole block using the fast block relay protocol.

Really? Hmm, if you have 2-byte indexes into the most likely to be
mined 60k transactions, then 2000 transactions per block is about 4000
bytes. So I guess that makes sense. And weak blocks would make that
generalisable and only add maybe a 32B index to include on the wire,
presumably.

It'd only take a dozen missed transactions to be longer though.
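For what it's worth, the "dozen missed transactions" breakeven falls straight out of those sizes (the ~300-byte average transaction size is an assumption of mine):

```python
# 2-byte indexes into a shared mempool vs the fixed 7168-byte bloom filter.
txs_per_block = 2000
index_coded = 2 * txs_per_block      # 4000 bytes if every tx is in the shared set
bloom_filter = 7168
avg_tx_size = 300                    # assumed; each cache miss ships a full tx

# Cache misses tolerable before index coding loses its size advantage:
breakeven = (bloom_filter - index_coded) / avg_tx_size
print(breakeven)                     # ~10.6, i.e. about a dozen misses
```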

> Your estimates are assuming the empty blocks come purely from
> transmission and verification, but because most verification is cached
> and transmission compressed-- they don't. There are numerous latency
> sources through the whole stack, some constant some
> size-proportional... the mining without validation achieves its gains
> not from skipping validation (at least not most of the time); but
> mostly from short cutting a deep stack with many latency sources;
> including ones that have nothing to do with bitcoin core or the
> Bitcoin protocol.

Hmm, so my assumption is the "bitcoin core" side of the stack looks
something like:

   block header received by p2p or relay network
     |
     V
   block data received by p2p or relay network
     |
     V
   validation, UTXO set updates
     |
     V
   getblocktemplate (possible tx ordering recalculation)
     |
     V
   block header to do PoW on!
     |
     V
   vary and push to miners over the network
     |
     V
   push to ASICs

and the validationless "shortcut" just looks like:

   block header received by p2p or relay network
     |
     V
   hack hack
     |
     V
   new block header to do PoW on!
     |
     V
   vary and push to miners over the network
     |
     V
   push to ASICs

and so making the bitcoin core parts able to provide an unvalidated
header to push to miners/ASICs against "instantly" would be a win as
far as getting bitcoin proper back into the loop all the time... That
would mean removing validation from the critical path, and possibly
more optimisation of getblocktemplate to make it effectively instant
too. But those seem possible?

Having it be:

  header received by bitcoin core
    |
    V
  new block header to do (unverified) PoW on!
    |
    V
  ...

and

  header received by bitcoin core
    |
    V
  block data received by bitcoin core
    |
    V
  block data validated
    |
    V
  new block header to do (verified) PoW on!
    |
    V
  ...

with mining tools being able to just reliably and efficiently leave
bitcoin core in the loop seems like it ought to be a win to me...

> Perhaps most importantly, VFM mining avoids needing to identify and
> characterize these other delay sources, by short cutting right at the
> end no one needs to even figure out that their pool server is
> performing a DNS request before every time it contacts their bitcoind
> RPC or whatnot.

At least with longpoll, doing a DNS query before connection shouldn't
matter?

> >   "Implement "headers-first" mining. As soon as a valid 80-byte block
> This BIP draft resulted in me relieving some pretty vicious attacks
> from that community... funny.

I'm guessing you meant "receiving", which makes that a kinda weird
freudian slip? :) But yeah, technical consistency isn't something I've
seen much of from that area...

> > Any thoughts? Pointers to the bitcointalk thread where this was proposed
> > two years ago? :)
> Relevant to your interests: https://github.com/bitcoin/bitcoin/pull/1586

Tsk, 2 != 4...

Hmm, I'm not sure where this leaves my opinion on either of those ideas.

Cheers,
aj


----------
From: Anthony Towns <aj@erisian.com.au>
Date: Sun, Mar 13, 2016 at 3:58 AM
To: Gregory Maxwell <gmaxwell@gmail.com>


On Thu, Mar 03, 2016 at 07:55:06AM +1000, Anthony Towns wrote:
> > >        - 7168 byte bloom filter
> > FWIW, thats significantly larger than the amount of data typically
> > needed to send the whole block using the fast block relay protocol.
> Really? Hmm, if you have 2-byte indexes into the most likely to be mined
> 60k transactions, by 2000 transactions per block is about 4000 bytes. So
> I guess that makes sense. And weak blocks would make that generalisable
> and only add maybe a 32B index to include on the wire, presumably.
> It'd only take a dozen missed transactions to be longer though.
So I think there's two levels of withholding adversarial miners could
do:

 - block withholding, so they have more time to build on top of their
   own block, maybe increasing their effective hashrate if they have
   above average connectivity

 - transaction withholding, so an entire block can be invalidated
   after the fact, hitting SPV nodes. if there are SPV miners, this can
   invalidate their work (potentially profitably, if you've accidently
   orphaned yourself)

You could solve transaction withholding for miners just by saying
"a PoW isn't valid unless the merkle tree is valid", that way you
can't retroactively invalidate a block, but then you need fast relay
before starting to mine, not just the header and some hint as to what
transactions might be included, and therefore the bloom filter idea
is pointless...


Having actually tried the relay network now, it seems like:

 a) it gets less coding gain than it theoretically could; the day or
    so's worth of blocks from Lightsword only seemed to be ~8x less data,
    rather than ~125x-250x, and what I'm seeing seems similar. So still
    room for improvement?

 b) using "weak blocks" as a way of paying for adding "non-standard"
    transactions (large, low fee, actually non-standard, etc) to the
    mempool seems workable to me; so long as the only reason you're doing
    weak blocks is so miners can ensure the transactions they're mining
    are in mempools, and thus that their blocks will relay quickly, the
    incentives seem properly aligned. (I think you'd want to distinguish
    txns only relayed because they have a weak block, just to be nice to
    SPV clients -- weak block txns might only be mined by one miner, while
    standard, fee paying transactions are being mined by all/most miners)

 c) it seems like it would be possible to adapt the relay protocol into
    a p2p environment to me? I'm thinking that you provide a bidirectional
    mapping for (a subset of) your mempool for each connection you
    have, so that you can quickly go to/from a 2-byte index to a
    transaction. If you make it so that whoever was listening gets to
    decide what transactions are okay, then you'd just need 9 of these
    maps -- 1 for each of your outgoing connections (ie, 8 total), plus
    another 1 that covers all your incoming connections, and each map
    should only really need to use up to about a meg of memory, which
    seems pretty feasible.  Maybe it means up to 8x5MB of your mempool
    is controlled by other people's policies rather than your own,
    but that doesn't seem too bad either.

 d) I'm a bit confused how it compares to IBLT; it seems like IBLT has
    really strong ordering requirements to work correctly, but if you
    had that you could compress the fast relay protocol really well,
    since you could apply the same ordering to your shared mempool, and
    then just send "next tx, next tx, skip 1 tx, next tx, next tx, skip
    3 tx, next tx, here's one you missed, ...", which with compression
    would probably get you to just a few /bits/ per (previously seen)
    transaction...  [0] [1]

 e) for p2p relay, maybe it would make sense to have the protocol only
    allow sending blocks where all the transactions are "previously
    seen". that way if you get a block where some txes haven't been
    seen before, you stall that block, and start sending transactions
    through. if another block comes in in the meantime, that doesn't
    have any new transactions, you send that block through straight away.
    that encourages sending weak blocks through first, to ensure your
    transactions are already in mempools and no one else can sneak
    in first.

Hmm... So that all seems kind of plausible to me; in how many ways am I
mistaken? :)

Cheers,
aj

[0] A hard-fork change to have the block merkle tree be ordered by txid,
    and have the transactions topologically sorted before being validated
    would be kind-of interesting here -- apart from making sorting
    obvious, it'd make it easy to prove that a block doesn't contain a
    transaction. Bit altcoin-y though...

[1] Maybe having the shared mempool indexes be sorted rather than FIFO
    would make the data structures hard; I don't think so, but not sure.


----------
From: Gregory Maxwell <gmaxwell@gmail.com>
Date: Sun, Mar 13, 2016 at 5:06 AM
To: Anthony Towns <aj@erisian.com.au>


On Sun, Mar 13, 2016 at 3:58 AM, Anthony Towns <aj@erisian.com.au> wrote:
> On Thu, Mar 03, 2016 at 07:55:06AM +1000, Anthony Towns wrote:
>> > >        - 7168 byte bloom filter
>> > FWIW, thats significantly larger than the amount of data typically
>> > needed to send the whole block using the fast block relay protocol.
>> Really? Hmm, if you have 2-byte indexes into the most likely to be mined
>> 60k transactions, by 2000 transactions per block is about 4000 bytes. So
>> I guess that makes sense. And weak blocks would make that generalisable
>> and only add maybe a 32B index to include on the wire, presumably.
>> It'd only take a dozen missed transactions to be longer though.
>
> So I think there's two levels of withholding adversarial miners could
> do:
>
>  - block withholding, so they have more time to build on top of their
>    own block, maybe increasing their effective hashrate if they have
>    above average connectivity

Also called "selfish mining".

>  - transaction withholding, so an entire block can be invalidated
>    after the fact, hitting SPV nodes. if there are SPV miners, this can
>    invalidate their work (potentially profitably, if you've accidently
>    orphaned yourself)
> You could solve transaction withholding for miners just by saying
> "a PoW isn't valid unless the merkle tree is valid", that way you
> can't retroactively invalidate a block, but then you need fast relay
> before starting to mine, not just the header and some hint as to what
> transactions might be included, and therefore the bloom filter idea
> is pointless...

Right, this is how Bitcoin Core works (won't extend a chain it hasn't
validated)-- but some miners have shortcutted it to reduce latency.
(And not just bypassing validation, but the whole process, e.g.
transaction selection; which historically has taken more time than
propagation).

> Having actually tried the relay network now, it seems like:
>
>  a) it gets less coding gain than it theoretically could; the day or
>     so's worth of blocks from Lightsword only seemed to be ~8x less data,
>     rather than ~125x-250x, and what I'm seeing seems similar. So still
>     room for improvement?

It's pretty variable.  It depends a lot on consistency between the
transactions the server side selects and the client. When spam attacks
go on, or miners change their policy, compression falls off until the
far end twiddles.

Go look at the distribution of the results.

>  c) it seems like it would be possible to adapt the relay protocol into
>     a p2p environment to me? I'm thinking that you provide a bidirectional
>     mapping for (a subset of) your mempool for each connection you
>     have, so that you can quickly go to/from a 2-byte index to a
>     transaction. If you make it so that whoever was listening gets to
>     decide what transactions are okay, then you'd just need 9 of these
>     maps -- 1 for each of your outgoing connections (ie, 8 total), plus
>     another 1 that covers all your incoming connections, and each map
>     should only really need to use up to about a meg of memory, which
>     seems pretty feasible.  Maybe it means up to 8x5MB of your mempool
>     is controlled by other people's policies rather than your own,
>     but that doesn't seem too bad either.

That is a bit kludgy, but yes-- it would work.

But the key thing about latency minimization is that you _must_ send a
block with no request; because otherwise the RTT for just the request
alone will totally dominate the transfer in most cases.  And having N
peers send you the whole block redundantly ends up hurting your
performance (esp because packet losses mean more round trips) even if
the compression is very high.

All these problems can be avoided; at least in theory. Optimal latency
mitigation would be achieved by something like block network coding
techniques:

https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding

With these techniques peers could blindly send you data without you
requesting it, while every byte they send would usefully contribute to
your reconstruction. With extra effort and opportunistic forwarding
the entire network could, in theory, receive a block in the time it
took the original host to send only one block, while making use of a
significant fraction of the network's whole bisection bandwidth.

>  d) I'm a bit confused how it compares to IBLT; it seems like IBLT has
>     really strong ordering requirements to work correctly, but if you
>     had that you could compress the fast relay protocol really well,
>     since you could apply the same ordering to your shared mempool, and
>     then just send "next tx, next tx, skip 1 tx, next tx, next tx, skip
>     3 tx, next tx, here's one you missed, ...", which with compression
>     would probably get you to just a few /bits/ per (previously seen)
>     transaction...  [0] [1]

Latency of block relay easily ends up CPU bound; even when not doing
anything too smart (this is why Matt's relay protocol stuff has AVX
sha2 code in it). Prior IBLT implementation attempts have performance
so low that their decode time ends up dwarfing transmission time, and
plain uncoded blocks are faster for common host/bandwidth
configurations.

The ordering requirements stuff is not that relevant in my view; you
likely believe this because Gavin rat-holed himself on it trying to
spec out ordering requirements for miners...  The reality of it is
that a uniform permutation of, say, 4000 transactions can be stored in
log2(4000!)/8 bytes, or about 5.2 kbytes (and this is easily achieved
just by using range coding to optimally pack integers in the range
[0..n_unpicked_txn) to pick transactions out of a lexicographically
sorted list) ... and this is without any prediction at all-- randomly
ordered txn in the block would work just as well.

[E.g. using the uint coder from the daala video codec project can code
these values with about 1% overhead, and runs at about 12MB/sec doing
so on my slow laptop]
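(As a sanity check on the log2(4000!)/8 figure above -- the helper name here is made up, but the arithmetic is just the entropy of a uniform permutation:)

```python
import math

def permutation_bytes(n):
    """Bytes needed to store a uniformly random permutation of n items:
    log2(n!)/8, computed via lgamma(n+1) = ln(n!) to avoid big integers."""
    return math.lgamma(n + 1) / math.log(2) / 8

# Just over 5.2 kbytes for a 4000-transaction block, as claimed.
print(permutation_bytes(4000))
```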

Recently some folks have been working privately on a block network
coding implementation... earlier attempts (even before IBLT became
trendy) were thwarted by the same thing that thwarts IBLT: the
decoding was so slow it dominated the latency. We've found some faster
coding schemes though... so it looks like it might be practical now. I
could send you more info if you read the block network coding page and
are interested in helping.

Both IBLT and BNC would be more useful in the weakblocks model
because there the decode speed isn't latency critical-- so if it needs
100ms of cpu time to decode an efficiently encoded block, that is no
big deal.

>  e) for p2p relay, maybe it would make sense to have the protocol only
>     allow sending blocks where all the transactions are "previously
>     seen". that way if you get a block where some txes haven't been
>     seen before, you stall that block, and start sending transactions
>     through. if another block comes in in the meantime, that doesn't
>     have any new transactions, you send that block through straight away.
>     that encourages sending weak blocks through first, to ensure your
>     transactions are already in mempools and no one else can sneak
>     in first.

Yes, it's perfectly reasonable to do that for bandwidth minimization--
though it doesn't minimize latency.  "Seen" is complex, you have no
guarantee a peer will accept any transaction you've sent it, or even
that it will retain any it sent you. So multiple round trips are
required to resolve missing transactions.

We haven't bothered implementing this historically because the
bandwidth reduction is small overall, and it's not the right strategy
for reducing latency-- the vast majority of bandwidth is eaten by
relay. Right now maybe 15% is used by blocks... so at most you'd get a
15% improvement here.

I did some fairly informal measurements and posted about it:
https://bitcointalk.org/index.php?topic=1377345.0

I also point out there that the existing blocksonly mode achieves
bandwidth optimal transport already (ignoring things like transaction
format compression)... just so long as you don't care about learning
about unconfirmed transactions. :)

> [0] A hard-fork change to have the block merkle tree be ordered by txid,
>     and have the transactions topologically sorted before being validated
>     would be kind-of interesting here -- apart from making sorting
>     obvious, it'd make it easy to prove that a block doesn't contain a
>     transaction. Bit altcoin-y though...

If you sort by data (or ID) without requiring the verifier to
topologically sort then an efficient permutation coder would only
spend bits on places where dependencies push things out of the
expected order... which is fairly rare.

Seems like a reasonable cost for avoiding the hardfork, no? The
receiver topo sort requirement would also require more memory in a
block verifier; and would be more complex to fraud proof, I think.

Engineering wise it's not quite so simple. It's helpful for miners to
have blocks sorted by feerate so that later stages of the mining
process can drop the least profitable transactions simply by
truncating the block.
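(The truncation trick in miniature -- with the block pre-sorted by feerate, descending, shrinking it is just cutting the tail; names here are illustrative, and this ignores parent/child dependencies:)

```python
def truncate_block(txs, max_weight):
    """txs: list of (feerate, weight, txid) tuples, already sorted by
    feerate descending. Keep transactions until the weight budget is
    hit; everything after the cut pays a lower feerate by construction,
    so no re-selection is needed."""
    kept, total = [], 0
    for feerate, weight, txid in txs:
        if total + weight > max_weight:
            break  # truncate: drop this and every cheaper tx after it
        kept.append((feerate, weight, txid))
        total += weight
    return kept
```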

> [1] Maybe having the shared mempool indexes be sorted rather than FIFO
>     would make the data structures hard; I don't think so, but not sure.

I tried to get Matt to do that for his stuff previously; pointing out
the sorted indexes would be easier to efficiently code. His
counterargument was that for 2000 txn, the two-byte indexes take 4kb,
which is pretty insignificant... and that his time would be better
spent trying to get the hit-rate up. I found that hard to argue with.
:)

----------
From: Anthony Towns <aj@erisian.com.au>
Date: Mon, Mar 14, 2016 at 3:08 AM
To: Gregory Maxwell <gmaxwell@gmail.com>


On Sun, Mar 13, 2016 at 05:06:25AM +0000, Gregory Maxwell wrote:
> >  - block withholding, so they have more time to build on top of their
> >    own block, maybe increasing their effective hashrate if they have
> >    above average connectivity
> Also called "selfish mining".

Yup.

> >  c) it seems like it would be possible to adapt the relay protocol into
> >     a p2p environment to me? [...]
> That is a bit kludgy, but yes-- it would work.
> But the key thing about latency minimization is that you _must_ send a
> block with no request; because otherwise the RTT for just the request
> alone will totally dominate the transfer in most cases.  And having N
> peers send you the whole block redundantly ends up hurting your
> performance (esp because packet losses mean more round trips) even if
> the compression is very high.

If the block can be encoded fully, then it's up to maybe 10kB per block
max (at 1MB blocksize); I don't think multiple transmissions matter much
in that case? Hmm, maybe it does...

> All these problems can be avoided; at least in theory. Optimal latency
> mitigation would be achieved by something like block network coding
> techniques:
> https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding

Ugh, patents. Interesting that the patents on turbo codes have expired,
last time I looked they hadn't.

> With these techniques peers could blindly send you data without you
> requesting it, while every byte they send would usefully contribute to
> your reconstruction.

Yeah, that makes sense I think. Pretty complicated though. The "someone
sent corrupt data" seems a little bit problematic to deal with too,
especially in the "optimistically forward stuff before you can validate
it" phase. At least if you're using error correcting codes anyway,
that's probably a self-solving problem.

What's with the switch from 32 bit faux ids in the original section
to 63 bits in the reimagination? I guess you use most of that for the
additional encoded length though...

Keying with the previous block's hash seems kind-of painful, doesn't it?
Once you receive the ids, you want to lookup the actual transactions
from your mempool, but since you can't decrypt anything useful with
only the first 50/60 bits of cyphertext, the only way to do that is
to have already cycled through all the transactions in your mempool
and pre-calculated what their network coded id for that block is, and
you have to do that every time you receive a block (including orphans,
I guess). It'd make reorgs more expensive too, because you'd have to
reindex all the mempool then as well?

Maybe if you're only doing that predictively it's not so bad? The 5MB-20MB
of txes with highest fees get coded up, and you just download any other
transactions in full? If you're downloading large coinbase txes regularly
anyway, that's probably no big deal.

> Latency of block relay easily ends up CPU bound; even when not doing
> anything too smart (this is why Matt's relay protocol stuff has AVX
> sha2 code in it).

Yeah, that seemed a little odd to me; there shouldn't be that much
hashing to validate a block (1MB of transactions, then maybe 128kB to
get to sha256d, then another 2*128kB for the rest of the merkle tree?).
Matt's code seems like it's doing a linear search through the tx index
to find each tx though, which probably doesn't help.
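(For concreteness, a toy version of the merkle computation being costed there -- double-SHA256 over concatenated pairs, duplicating the last entry on odd levels; this sketch ignores Bitcoin's byte-order conventions. For ~2000 txids the interior levels halve each time, so roughly 2000 further 64-byte sha256d calls on top of hashing the transactions themselves:)

```python
import hashlib

def sha256d(data):
    """Double-SHA256, as used for Bitcoin merkle nodes."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    """Merkle root over 32-byte txids: pair up, hash, repeat;
    an odd-length level duplicates its last entry."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```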

> Prior IBLT implementation attempts have performance
> so low that their decode time ends up dwarfing transmission time, and
> plain uncoded blocks are faster for common host/bandwidth
> configurations.

Heh.

> The ordering requirements stuff is not that relevant in my view; you
> likely believe this because Gavin rat-holed himself on it trying to
> spec out ordering requirements for miners...  The reality of it is
> that a uniform permutation of, say, 4000 transactions can be stored in
> log2(4000!)/8 bytes, or about 5.2kbytes

Right, but 5.2 kB is a lot of overhead; at least compared to the cases
where Matt's stuff already works well :)

> Recently some folks have been working privately on a block network
> coding implementation... earlier attempts (even before IBLT became
> trendy) were thwarted by the same thing that thwarts IBLT: the
> decoding was so slow it dominated the latency. We've found some faster
> coding schemes though... so it looks like it might be practical now. I
> could send you more info if you read the block network coding page and
> are interested in helping.

Sure. (Though, fair warning, I've already failed a few times at doing
anything useful with erasure coding...)

> >  e) for p2p relay, maybe it would make sense to have the protocol only
> >     allow sending blocks where all the transactions are "previously
> >     seen". that way if you get a block where some txes haven't been
> >     seen before, you stall that block, and start sending transactions
> >     through. if another block comes in in the meantime, that doesn't
> >     have any new transactions, you send that block through straight away.
> >     that encourages sending weak blocks through first, to ensure your
> >     transactions are already in mempools and no one else can sneak
> >     in first.
> Yes, it's perfectly reasonable to do that for bandwidth minimization--
> though it doesn't minimize latency.  "Seen" is complex, you have no
> guarantee a peer will accept any transaction you've sent it, or even
> that it will retain any it sent you. So multiple round trips are
> required to resolve missing transactions.

The "p2p relay" in my head has "seen" meaning &q= uot;the 5MB of transactions
the listening peer thinks is most likely to be mined", odds on both peers have actually seen something like 145MB of additional transactions too. You don't do round trips; you just start sending the "unseen&= quot;
transactions automatically (by id or in full?), then you send the
compressed block. The only round trip is if you sent the id, but they
actually needed the full tx.

In my head, you get good latency if you do weak blocks beforehand,
and somewhat poorer latency if you don't. Even in my head, I'm not sure
that's actually feasible, though: I'm not sure weak blocks for coinbase
transactions really work, and comparatively high latency on 5% of blocks
that didn't get any weak blocks beforehand isn't very attractive...

> We haven't bothered implementing this historically because the
> bandwidth reduction is small overall, and it's not the right strategy
> for reducing latency-- the vast majority of bandwidth is eaten by
> relay. Right now maybe 15% is used by blocks... so at most you'd get a
> 15% improvement here.

Yeah, I'm assuming a non-trivial increase in bandwidth usage compared
to current relay. Compared to relaying spam transactions (that don't
get mined prior to expiry), not sure it's significant though.

> > [0] A hard-fork change to have the block merkle tree be ordered by txid,
> >     and have the transactions topologically sorted before being validated
> >     would be kind-of interesting here -- apart from making sorting
> >     obvious, it'd make it easy to prove that a block doesn't contain a
> >     transaction. Bit altcoin-y though...
> If you sort by data (or ID) without requiring the verifier to
> topologically sort then an efficient permutation coder would only
> spend bits on places where dependencies push things out of the
> expected order... which is fairly rare.

Really? I was seeing a lot of transaction chains in the couple of blocks I
looked at. Also, you wouldn't get short proofs that a transaction isn't
present in a block that way either afaics.

> Seems like a reasonable cost for avoiding the hardfork, no? The
> receiver topo sort requirement would also require more memory in a
> block verifier; and would be more complex to fraud proof, I think.

Hmm, I think it'd be easy to fraud proof -- just show adjacent merkle
paths where the results are in the wrong order. Maybe the same's true
with the id-order-but-toposorted too -- just show adjacent merkle paths
where the results are in the wrong order, and the later doesn't depend
on the former. I'm not sure that gives a unique sort though (but maybe
that doesn't actually matter).
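(A rough sketch of the adjacent-merkle-path machinery, here applied to the non-inclusion claim from [0]: in a txid-sorted tree, two leaves at adjacent positions that bracket the target show it can't be in the block. Names and proof format are invented for illustration; a real fraud/absence proof would need the positions committed more carefully:)

```python
import hashlib

def sha256d(b):
    # Double-SHA256, as used for Bitcoin merkle nodes.
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_path(leaf, index, path, root):
    """Standard merkle branch check: fold in each sibling, placing the
    running hash left or right according to the index's low bit."""
    h = leaf
    for sibling in path:
        h = sha256d(h + sibling) if index % 2 == 0 else sha256d(sibling + h)
        index //= 2
    return h == root

def proves_absence(target, left, left_idx, left_path, right, right_path, root):
    """In a tree sorted by txid, two *adjacent* leaves that both verify
    against root and bracket target prove target isn't in the block."""
    return (left < target < right
            and verify_path(left, left_idx, left_path, root)
            and verify_path(right, left_idx + 1, right_path, root))
```

The ordering-fraud proof mentioned above is the same primitive run in reverse: two adjacent paths whose leaves compare out of order.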

> Engineering wise it's not quite so simple. It's helpful for miners to
> have blocks sorted by feerate so that later stages of the mining
> process can drop the least profitable transactions simply by
> truncating the block.

Yeah; not having ordering requirements seems far more practical.

> > [1] Maybe having the shared mempool indexes be sorted rather than FIFO
> >     would make the data structures hard; I don't think so, but not sure.
> I tried to get Matt to do that for his stuff previously; pointing out
> the sorted indexes would be easier to efficiently code. His
> counterargument was that for 2000 txn, the two-byte indexes take 4kb,
> which is pretty insignificant... and that his time would be better
> spent trying to get the hit-rate up. I found that hard to argue with.
> :)

Yeah. Having the bitcoin mempool and fee info (and heck, priority info)
more readily available when seeing new transactions and choosing what to
include seems like it'd be helpful here. Seems relatively painful to do
that outside of bitcoin though.

Cheers,
aj


----------
From: Gregory Maxwell <gmaxwell@gmail.com>
Date: Mon, May 15, 2017 at 8:03 PM
To: Anthony Towns <aj@erisian.com.au>


I ran into someone proposing the same thing as you. Can I share this
discussion with them? (with the public?)

----------
From: Anthony Towns <aj@erisian.com.au>
Date: Mon, May 15, 2017 at 11:00 PM
To: Gregory Maxwell <gmaxwell@gmail.com>


Yes, go ahead on both counts.
--
Sent from my phone.

