From: Jeff Garzik
Date: Tue, 10 Jun 2014 09:02:00 -0400
To: Mike Hearn
Cc: Bitcoin Dev
Subject: Re: [Bitcoin-development] Bloom bait

Most of this description of disk activity is true, but it omits one key point: total cached data (the working set). It is a binary, first-order question: are you hitting pagecache, or the disk?

When nodes act as archival data sources, the pagecache pressure is immense. When nodes primarily serve recent blocks, that data is served out of pagecache. As I directly observed running public nodes, the disks were running constantly, impacting all clients, even clients downloading only recent blocks.

Luckily, headers are served out of RAM, so that part of the sync is always fast. NODE_BLOOM -- and block download in general -- will tend to be slower than it could be, because the working set is almost always larger than the available pagecache. Fix that problem and NODE_BLOOM will always operate out of pagecache, and disk activity stops being an issue. Once you start hitting the disk, you've already lost.
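Whether you are in that losing regime is easy to measure. Below is a rough, Linux-only sketch (untested, purely illustrative, assuming the standard ~/.bitcoin/blocks layout) that uses mmap() and mincore() to report how much of a given block file is currently resident in pagecache; on a busy archival node, old blk*.dat files will typically come back close to zero, which is exactly when every filtered request turns into seeks:

// pagecache_residency.cpp - rough sketch: what fraction of a file's pages
// are resident in the Linux page cache right now?  Uses mmap() + mincore().
// Build: g++ -O2 pagecache_residency.cpp -o pagecache_residency
// Usage: ./pagecache_residency ~/.bitcoin/blocks/blk00000.dat

#include <cstdio>
#include <vector>

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char** argv)
{
    if (argc != 2) {
        std::fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }
    if (st.st_size == 0) { std::fprintf(stderr, "empty file\n"); return 1; }

    // Map the file without touching its contents; mincore() only asks the
    // kernel which pages are already resident, it does not fault them in.
    void* map = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    long page = sysconf(_SC_PAGESIZE);
    size_t npages = (st.st_size + page - 1) / page;
    std::vector<unsigned char> vec(npages);

    if (mincore(map, st.st_size, vec.data()) != 0) { perror("mincore"); return 1; }

    size_t resident = 0;
    for (unsigned char b : vec)
        resident += b & 1;          // low bit set => page is in pagecache

    std::printf("%zu of %zu pages resident (%.1f%%)\n",
                resident, npages, 100.0 * resident / npages);

    munmap(map, st.st_size);
    close(fd);
    return 0;
}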
On Tue, Jun 10, 2014 at 6:38 AM, Mike Hearn wrote:
>> As I explained in the email you're replying to and didn't quote, bloom
>> filters have O(n) cost per query, so sending different bloom filters to
>> different peers for privacy reasons costs the network significant disk
>> IO resources. If I were to actually implement it, it'd look like a DoS
>> attack on the network.
>
> DoS attack? Nice try.
>
> Performance is subtle, disk iops especially so. I suspect you'd find - if you implemented it - that for the kinds of loads Bitcoin is processing both today and tomorrow, prefix filtering either doesn't save any disk seeks or actively makes things worse.
>
> Consider a client that is syncing the last 24 hours of chain. bitcoind pre-allocates space for blocks in large chunks, so most blocks are laid out sequentially on disk. Almost all the cost of a disk read is rotational latency. Once the head is in place, data can be read in very quickly, and modern kernels will adaptively read ahead to exploit this, especially if a program seems to be working through a disk file sequentially. The work of Bloom filtering parts of the chain for this client boils down to a handful of disk seeks at best, and the rest of the work is all CPU/memory bound as the block is parsed into objects and tested against the filter. A smarter filtering implementation than ours could do SAX-style parsing of the block and avoid the overhead of turning it all into objects.
>
> Now consider a prefix filtering implementation. You need to calculate a sorted list of all the data elements and tx hashes in the block, each mapping to the location in the block where the tx data can be found. These per-block indexes take up extra disk space and, realistically, would likely be implemented using LevelDB, as that's a tool designed for creating and using these kinds of tables. So then you're both loading the block data itself (blocks are currently sized about right to always fit in the default kernel readahead window) AND seeking through the indexes, and building them too. A smart implementation might try to pack the index next to each block so it's possible to load both at once with a single seek, but that would probably be more work, as it'd force building of the index to be synchronous with saving the block to disk, thus slowing down block relay. In contrast, a LevelDB-based index would do the bulk of the index-building work on a separate core.
>
> At some block size and client load, the additional data storage and increased pressure on the page cache would probably make prefix filtering worthwhile. But I find it unlikely to be true at current traffic levels, or at double or triple today's levels. So I'd rather we spend our very limited collective time on finding ways to increase usage rather than worrying about resources which are not presently scarce.
>
> (As an aside, some of the above analysis would be invalidated if most nodes end up running on SSDs, but I doubt most are. It'd be neat to export storage tech via some kind of stats message - LevelDB is designed for HDDs, not SSDs, so at some point a new storage subsystem might make sense if the network switched over.)
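To put some shape on that trade-off, here is a rough sketch (nothing like this exists in bitcoind; the key layout is purely an assumption for illustration) of the per-block prefix index described above, as it might be keyed in something like LevelDB:

// Hypothetical key/value layout for a per-block prefix index.  Purely a
// sketch to show where the extra reads and writes come from; not part of
// bitcoind or any proposal in this thread.
//
//   key   = 8-byte data-element prefix || block hash
//   value = where to find the matching tx (blk file, block offset, tx offset)

#include <algorithm>
#include <array>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <string>
#include <vector>

using Prefix    = std::array<uint8_t, 8>;
using BlockHash = std::array<uint8_t, 32>;

struct IndexValue {
    uint32_t file;       // which blk*.dat file holds the block
    uint32_t blockPos;   // byte offset of the block within that file
    uint32_t txPos;      // byte offset of the tx within the block
};

// First 8 bytes of a pushed data element (pubkey hash, script hash, txid...),
// zero-padded if the element is shorter.
static Prefix MakePrefix(const std::vector<uint8_t>& element)
{
    Prefix p{};
    if (!element.empty())
        std::memcpy(p.data(), element.data(),
                    std::min<size_t>(element.size(), p.size()));
    return p;
}

// Key = prefix || block hash, so all entries for one prefix sort together and
// a client's query becomes one range scan per prefix it is watching.
static std::string MakeKey(const Prefix& prefix, const BlockHash& hash)
{
    std::string key(reinterpret_cast<const char*>(prefix.data()), prefix.size());
    key.append(reinterpret_cast<const char*>(hash.data()), hash.size());
    return key;
}

// Value = raw struct bytes; a real index would pin down endianness/versioning.
static std::string MakeValue(const IndexValue& v)
{
    std::string out(sizeof(v), '\0');
    std::memcpy(&out[0], &v, sizeof(v));
    return out;
}

int main()
{
    // One entry per data element per transaction: building the index touches
    // the whole block, and answering a query still needs a seek into blk*.dat
    // for the transaction data itself.
    std::vector<uint8_t> element(20, 0xab);    // e.g. a pubkey hash
    BlockHash hash{};                          // placeholder block hash
    std::string key = MakeKey(MakePrefix(element), hash);
    std::string val = MakeValue(IndexValue{0, 8, 219});
    std::printf("key %zu bytes, value %zu bytes per indexed element\n",
                key.size(), val.size());
    return 0;
}

Note the index is built once per block (unlike bloom matching, which re-scans the block for every peer's filter), but each query then costs index reads plus a seek into blk*.dat for the transaction data itself - which is the disk trade-off being argued over above.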
--
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc.      https://bitpay.com/