In Step 3.3, 0xFD causes an error, but it could instead indicate a parameterized dictionary entry. For example, 0xFD, 0x01 followed by eight more bytes to be interpreted according to steps 3.1 or 3.5 could mean OP_DUP OP_HASH160 (20 bytes from the blockchain dictionary) OP_EQUALVERIFY OP_CHECKSIG, replacing that very common occurrence of 24 bytes with 10 bytes (well, 11 if you include the 0x00 required by step 5). But that only works on addresses that have spent inputs. Or 0xFD, 0x02 could be shorthand for the four zeroes of lock_time, followed by Version (1), followed by 0x01 (for one-input transactions), turning nine bytes into two for the data at the end of a normal (lock_time = 0) Txn and the beginning of a single-input Txn. I left 0xFD as an error, though, because those gains didn't seem as frequent as the others.
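To make the idea concrete, here is a minimal decoding sketch for such parameterized entries. The entry codes (0x01, 0x02), the 8-byte dictionary reference, and the LookupHash160() helper are assumptions for illustration only; they are not defined in the draft.

// Sketch only: expands a hypothetical 0xFD "parameterized dictionary entry"
// into the bytes it stands for. The entry codes (0x01, 0x02), the 8-byte
// dictionary reference, and LookupHash160() are illustrative assumptions.
#include <cstdint>
#include <stdexcept>
#include <vector>

// Assumed helper: resolves an 8-byte reference (per steps 3.1/3.5) to the
// 20-byte HASH160 it points at in the shared blockchain dictionary.
std::vector<uint8_t> LookupHash160(const uint8_t* ref8);

std::vector<uint8_t> ExpandParameterizedEntry(const uint8_t*& p, const uint8_t* end)
{
    if (p >= end) throw std::runtime_error("truncated 0xFD entry");
    const uint8_t code = *p++;

    if (code == 0x01) {
        // 0xFD 0x01 <8-byte dictionary reference>  ->  the standard P2PKH script:
        // OP_DUP OP_HASH160 <push 20 bytes> OP_EQUALVERIFY OP_CHECKSIG
        if (end - p < 8) throw std::runtime_error("truncated dictionary reference");
        std::vector<uint8_t> hash = LookupHash160(p);
        p += 8;
        std::vector<uint8_t> out{0x76, 0xa9, 0x14};       // OP_DUP, OP_HASH160, push-20
        out.insert(out.end(), hash.begin(), hash.end());  // 20-byte HASH160 from the dictionary
        out.push_back(0x88);                              // OP_EQUALVERIFY
        out.push_back(0xac);                              // OP_CHECKSIG
        return out;
    }
    if (code == 0x02) {
        // 0xFD 0x02  ->  lock_time = 0 (four zero bytes), version = 1 (little-endian),
        // input count = 1: nine bytes reconstructed from two.
        return {0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01};
    }
    throw std::runtime_error("unknown parameterized dictionary entry");
}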
Dave.
On 30/11/2015 9:28 PM, Matt Corallo wrote:
> I'm really not a fan of this at all. To start with, adding a compression library that is directly accessible to the network on financial software is a really, really scary idea.

Why scary? LZO has no current security issues, and it will be configurable by each node operator, so it can be turned off completely if needed or desired.
> If there were a massive improvement, I'd find it acceptable, but the improvement you've shown really isn't all that much.

Why is 15% at the low end to 27% at the high end not good? It sounds like a very good boost.
> The numbers you recently posted show it improving the very beginning of IBD somewhat over high-latency connections, but if we're throughput-limited after the very beginning of IBD, we should fix that, not compress the blocks.

I only did the compression up to block 200,000 to better isolate the transmission of data from the post-processing of blocks, and to determine whether compressing the data was adding too much to the total transmission time.
I think it's clear from the data that as blocks and transactions increase in size, (1) they compress better and (2) compressing them has a bigger positive impact on performance.
> Additionally, I'd be very surprised if this had any significant effect on the speed at which new blocks traverse the network (do you have any simulations or other thoughts on this?).

From the table below, at 120,000 blocks the time to sync the chain was roughly the same for compressed vs. uncompressed; after that point, as block sizes start increasing, every compression library performed much faster than uncompressed. The data from this testing clearly shows that as block size increases, so does the performance improvement from compressing the data.
TABLE 5:
Results shown in seconds with 60ms of induced latency
Num blks sync'd Uncmp Zlib-1 Zlib-6 LZO1x-1 LZO1x-999
--------------- ----- ------ ------ ------- ---------
120000 3226 3416 3397 3266 3302
130000 4010 3983 3773 3625 3703
140000 4914 4503 4292 4127 4287
150000 5806 4928 4719 4529 4821
160000 6674 5249 5164 4840 5314
170000 7563 5603 5669 5289 6002
180000 8477 6054 6268 5858 6638
190000 9843 7085 7278 6868 7679
200000 11338 8215 8433 8044 8795
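For reference, the Zlib-1 and Zlib-6 columns above correspond to zlib's one-shot compression at levels 1 and 6. A minimal sketch of compressing a serialized block that way follows; the buffer handling is illustrative only and is not the proposal's actual wire format.

// Sketch of the whole-block compression benchmarked above, using zlib's
// one-shot API (levels 1 and 6 match the Zlib-1 and Zlib-6 columns). The
// serialized block is treated as an opaque byte buffer; this is not the
// proposal's actual wire format.
#include <stdexcept>
#include <vector>
#include <zlib.h>

std::vector<unsigned char> CompressBlock(const std::vector<unsigned char>& raw, int level = 1)
{
    uLongf destLen = compressBound(raw.size());              // worst-case compressed size
    std::vector<unsigned char> out(destLen);
    const int rc = compress2(out.data(), &destLen,
                             raw.data(), raw.size(), level); // level 1 = fastest, 6 = zlib default
    if (rc != Z_OK) throw std::runtime_error("compress2 failed");
    out.resize(destLen);                                     // keep only the bytes actually produced
    return out;
}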
As far as what happens after the block is received: obviously compression isn't going to help with post-processing and validating the block, but in the pure transmission of the object it most certainly does, and in fairly direct proportion to the file size (a file that is 20% smaller will be transmitted at least 20% faster; you can check that with any data-transfer time calculator). The only issue I could see that required testing was how much compression there would be, and how much time compressing the data would add to the sending of it. A back-of-the-envelope version of that calculation is sketched below.
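Here is that back-of-the-envelope model; every number in it is made up purely for illustration, and only the shape of the calculation matters:

// Back-of-the-envelope model of the claim above: wire time scales with size,
// so the question is whether compression plus decompression time eats the
// savings. Every number here is made up purely for illustration.
#include <cstdio>

int main()
{
    const double block_bytes   = 1.0e6;   // a 1 MB block
    const double bandwidth_Bps = 250e3;   // ~2 Mbit/s link
    const double ratio         = 0.80;    // compressed size / original size (20% smaller)
    const double compress_s    = 0.02;    // time to compress
    const double decompress_s  = 0.01;    // time to decompress

    const double plain = block_bytes / bandwidth_Bps;
    const double comp  = compress_s + (block_bytes * ratio) / bandwidth_Bps + decompress_s;

    std::printf("uncompressed: %.2f s, compressed: %.2f s\n", plain, comp);
    // With these numbers: 4.00 s vs 0.02 + 3.20 + 0.01 = 3.23 s, i.e. roughly
    // the 20% wire-time saving minus the (de)compression overhead.
    return 0;
}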