From: Kaz Wesley <keziahw@gmail.com>
Date: Thu, 17 Jul 2014 14:35:35 -0700
To: bitcoin-development@lists.sourceforge.net
Subject: [Bitcoin-development] Squashing redundant tx data in blocks on the wire
OVERVIEW

To improve block propagation, add a new block message that doesn't include
transactions the peer is known to have. The message must never require an
additional round trip due to any transactions the peer doesn't have, but should
be compatible with peers sometimes forgetting transactions they have known.

APPROACH

For peers advertising support for squashed blocks: a node tracks what txes it
knows each peer has seen (inv received, tx sent, tx appeared in competing block
known to peer). Nodes push block contents as txes-not-already-known +
txids-known.
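
As a rough sketch of what this could look like (the names here are
hypothetical, not existing Bitcoin Core APIs), keeping entries in block order
so the receiver can rebuild the exact tx list:

#include <array>
#include <cstdint>
#include <set>
#include <variant>
#include <vector>

using TxId = std::array<uint8_t, 32>;  // stand-in for uint256

struct Tx { TxId id; std::vector<uint8_t> data; };

// Per-peer view: txids this peer is believed to have seen
// (inv received, tx sent, or tx in a competing block it knows).
struct PeerTxView { std::set<TxId> known; };

// One entry per block position: either the full tx or just its txid.
using Entry = std::variant<Tx, TxId>;

std::vector<Entry> Squash(const std::vector<Tx>& blockTxes,
                          const PeerTxView& peer) {
    std::vector<Entry> out;
    out.reserve(blockTxes.size());
    for (const Tx& tx : blockTxes) {
        if (peer.known.count(tx.id)) out.emplace_back(tx.id);
        else out.emplace_back(tx);
    }
    return out;
}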

A node should be able to forget invs it has seen without invalidating what peers
know about its known txes. To allow for this, a node assembles a bloom filter of
a set of txes it is going to forget, and sends it to peers. The node can erase
the txes as soon as no blocks requested before the filter was pushed are in
flight (relying on the assumption that messages can be expected to be processed
in order).
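
One way the erase timing could be tracked (again purely illustrative): stamp
each pushed filter with a block-request counter, and erase only once every
block requested before that stamp has been delivered:

#include <array>
#include <cstdint>
#include <set>

using TxId = std::array<uint8_t, 32>;

struct ForgetBatch {
    std::set<TxId> txes;   // txes pending erasure, all matched by the filter
    uint64_t seqWhenSent;  // value of the block-request counter at push time
};

// Safe once no block requested before the filter was pushed is in flight
// (relying, as above, on in-order message processing).
bool CanErase(const ForgetBatch& batch, uint64_t oldestInFlightSeq) {
    return oldestInFlightSeq > batch.seqWhenSent;
}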

When a node receives a forgotten-filter, it ORs it into its forgotten-filter for
that peer. Any transactions matching the forgotten-filter are always included in
full with a block. If the filter is getting full, the node can just clear it
along with peer.setTxKnown.
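
The receive side might then look like this (BloomFilter here is a minimal
placeholder, not the existing CBloomFilter; a concrete probe scheme is
sketched near the end of this mail):

#include <array>
#include <cstdint>
#include <set>
#include <vector>

using TxId = std::array<uint8_t, 32>;

struct BloomFilter {
    std::vector<uint8_t> bits = std::vector<uint8_t>(1024, 0);
    void OrWith(const BloomFilter& o) {
        for (size_t i = 0; i < bits.size() && i < o.bits.size(); ++i)
            bits[i] |= o.bits[i];
    }
    bool Contains(const TxId& id) const {
        // single probe for brevity; a real filter would use several
        size_t b = (size_t(id[0]) | size_t(id[1]) << 8) % (bits.size() * 8);
        return bits[b / 8] & (1 << (b % 8));
    }
    bool NearlyFull() const {
        size_t set = 0;
        for (uint8_t byte : bits)
            for (int i = 0; i < 8; ++i) set += (byte >> i) & 1;
        return set * 2 > bits.size() * 8;  // more than half the bits set
    }
    void Clear() { bits.assign(bits.size(), 0); }
};

struct Peer {
    BloomFilter forgottenFilter;
    std::set<TxId> setTxKnown;
};

void OnForgottenFilter(Peer& peer, const BloomFilter& incoming) {
    peer.forgottenFilter.OrWith(incoming);
    if (peer.forgottenFilter.NearlyFull()) {
        // Degrade gracefully: drop the filter and all known-tx state,
        // falling back to sending txes in full until the view is rebuilt.
        peer.forgottenFilter.Clear();
        peer.setTxKnown.clear();
    }
}

// A tx is sent in full unless the peer knows it and it has never matched
// a forget-filter (a match means the peer may have dropped it since).
bool SendInFull(const Peer& peer, const TxId& id) {
    return !peer.setTxKnown.count(id) || peer.forgottenFilter.Contains(id);
}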

COSTS

Bloom filtering:
Since the bloom filter is likely to grow slowly and can be dropped when it is
becoming full, a cheap set of hash functions and element size can be used to
keep overhead more tightly bounded than the bloom filtering done for SPV. It's
important for testing txes against the filter to be fast so that it doesn't
delay pushing the block more than the squashing helps.
Nodes currently forget txes rarely, so the bloom filters would only need to be
used at all under conditions that are not currently common -- but I think
they're important to include to allow for different node behavior in this
regard in the future.
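
(For scale: with, say, four probes per txid, testing all 500 txes of the
example block below costs about 2,000 masked memory reads, i.e. on the order
of tens of microseconds, while the squashing it enables saves on the order of
100 kB of transmission.)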

Tracking txes known to peers:
A multimap of txid->peerId would obviate the current setCurrentlyKnown, and
would not take much more space, since each additional peer adds about one
peerId per txid (setCurrentlyKnown keeps a uint256 per peer per txid, although
it tracks somewhat fewer txids per node).
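
For illustration, the lookup that structure implies, with types approximated
by standard containers:

#include <array>
#include <cstdint>
#include <map>

using TxId = std::array<uint8_t, 32>;  // stand-in for uint256
using NodeId = int64_t;

// One shared multimap instead of a 32-byte hash per peer per txid:
// each additional peer that knows a tx costs one small peerId entry.
std::multimap<TxId, NodeId> txKnownBy;

bool PeerKnowsTx(const TxId& tx, NodeId peer) {
    auto range = txKnownBy.equal_range(tx);
    for (auto it = range.first; it != range.second; ++it)
        if (it->second == peer) return true;
    return false;
}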

Potential vulnerabilities:
- Since the bloom filters will have lower maximum overhead than the current SPV
  filters and can be dropped at will, this shouldn't enable any resource
  exhaustion attacks that aren't already possible.
- A squashed block with bogus or missing data would be easily detected not to
  produce the correct merkle root for its BlockHeader.

BENEFITS

Assuming a fairly typical 500-tx block with transaction sizes averaging 300 B
(both on the low side), for a 150 kB block:

% pruned | block size reduction | relative size reduction
-------- | -------------------- | -----------------------
100      | 134 kB               | 89%
50       | 67 kB                | 45%
25       | 33.5 kB              | 22%
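
(The rows follow from replacing each pruned tx's ~300 B with a 32 B txid: at
50%, 250 * (300 - 32) = 67,000 B, i.e. 67 kB of the 150 kB block, about 45%.)
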
I've been doing some logging, and when my node pushes a block to a peer, it
typically already knows that the peer has seen most of the txes in the block.
Even in the case of a small block where only 25% of the txes are known to be
known, the total network bandwidth saved exceeds the bloom filters transmitted,
unless a node is forgetting transactions so rapidly that it pushes new
maximum-size forget-filters every block.
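
(For a sense of scale: at a typical ~10 bits per bloom filter element, even a
forget-filter covering 10,000 txes is only ~12.5 kB, well under the 33.5 kB
saved in the 25% case above -- and a filter would not normally be pushed
anywhere near once per block.)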

So this is a net gain even in total bandwidth usage, but most importantly it's
an improvement in block propagation rate and in how block propagation rate
scales with additional transactions.

IMPLEMENTATION QUESTIONS

How should block squashing capability be advertised -- a new service bit?
Bloom filters:
- How fast could testing against a suitable bloom filter be made?
- How much memory would each filter need to take, at maximum?
- Can the inputs all being 32-byte hashes be used to optimize filter hash
  calculations? (a sketch of one possibility follows)
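
On the last question, one illustrative possibility: since txids are already
uniformly distributed 256-bit hashes, the k probe positions can be read
directly out of the txid, with no per-element hashing at all:

#include <array>
#include <cstdint>
#include <cstring>
#include <vector>

using TxId = std::array<uint8_t, 32>;

// Illustrative filter: each probe index is a 32-bit slice of the txid
// itself, so inserting or testing is a few masked memory accesses.
class TxIdBloomFilter {
    std::vector<uint8_t> bits;
    size_t nBits;
    static const int K = 4;  // probes, each a 32-bit slice of the txid

    size_t Index(const TxId& id, int i) const {
        uint32_t v;
        std::memcpy(&v, id.data() + 4 * i, 4);  // i-th 4-byte slice
        return v % nBits;
    }

public:
    explicit TxIdBloomFilter(size_t n) : bits((n + 7) / 8, 0), nBits(n) {}

    void Insert(const TxId& id) {
        for (int i = 0; i < K; ++i) {
            size_t b = Index(id, i);
            bits[b / 8] |= uint8_t(1 << (b % 8));
        }
    }

    bool Contains(const TxId& id) const {  // "probably contains"
        for (int i = 0; i < K; ++i) {
            size_t b = Index(id, i);
            if (!(bits[b / 8] & (1 << (b % 8)))) return false;
        }
        return true;
    }
};

A false positive here only costs sending one tx in full that the peer already
had; it can never corrupt a block, since the receiver checks the reconstructed
merkle root regardless. At ~10 bits per element with K = 4, the false-positive
rate is roughly 1%.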

ROADMAP

If there's support for this proposal, I can begin working on the specific
implementation details, such as the bloom filters, message format, and
capability advertisement, and draft a BIP once I have a concrete proposal for
what those would look like and a corresponding precise cost/benefit analysis.

--kaz