From: Bradley Denby <bdenby@cmu.edu>
To: Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] BIP proposal - Dandelion: Privacy Preserving Transaction Propagation
Date: Mon, 11 Jun 2018 10:31:09 -0400	[thread overview]
Message-ID: <CAGq_bNKj4rA9pzk7CPA0r099PXOy3naNfZsr=MSPpYh08OZ6TQ@mail.gmail.com> (raw)
In-Reply-To: <CAPg+sBjdTmZ4m5c92CQK5DsU18M=GKgTM-OZZzwgjpE3hqe6=w@mail.gmail.com>

Thanks for the comments Pieter!

We can make the descriptions of the intended node behaviors clearer in the
BIP.

Regarding interaction with BIPs 37 and 133, we have found that if Dandelion
routing decisions are based on self-reported features, malicious nodes can
often exploit that to launch serious deanonymization attacks. As a result,
we recommend not allowing fee filters from peers to influence the choice of
route. Your suggestion of automatically fluffing such transactions is a good
solution.
Another (similar) option would be to apply fee filters in the stempool.
This would prevent the tx from propagating in stem phase, so eventually an
embargo timer on the stem will expire and the transaction will fluff. This
is slower than auto-fluffing, but requires (slightly) less code.
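
For concreteness, here is a minimal, self-contained C++ sketch of that
decision; the types and names are illustrative stand-ins, not taken from the
reference patch. A stem transaction whose fee rate would not pass the next
hop's BIP 133 filter is either fluffed immediately (option A) or held in the
stempool until its embargo timer fires (option B).

    #include <cstdint>
    #include <string>

    // Illustrative stand-ins for the real node state; not Bitcoin Core types.
    struct StemPeer { int64_t min_fee_rate; };                 // BIP 133 value from the next hop
    struct StemTx   { std::string txid; int64_t fee_rate; };   // stem-phase transaction

    enum class StemAction { RelayOnStem, FluffNow, HoldUnderEmbargo };

    StemAction DecideStemRelay(const StemTx& tx, const StemPeer& next_hop,
                               bool auto_fluff)
    {
        if (tx.fee_rate >= next_hop.min_fee_rate) return StemAction::RelayOnStem;
        // Option A: fluff immediately rather than route to a peer that would
        // drop the transaction.
        if (auto_fluff) return StemAction::FluffNow;
        // Option B: keep it out of the stem relay; when the embargo timer
        // expires without the transaction being seen in fluff phase, this
        // node fluffs it itself.
        return StemAction::HoldUnderEmbargo;
    }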

Regarding mempool-dependent transactions, the reference implementation adds
any mempool transaction to the stempool, but not vice versa, so that the
stempool becomes a superset of the mempool. In other words, information is
free to flow from the mempool to the stempool. Information does not flow
from the stempool to the mempool except when a transaction fluffs. As a
result, a node's stempool should accept and propagate Dandelion
transactions that depend on other unconfirmed normal mempool transactions.
The behavior you described is not intended; if you have any tests
demonstrating this behavior, would you mind sharing them?
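
As a rough illustration of that one-way flow, here is a small self-contained
C++ sketch using plain sets of txids rather than the actual mempool classes;
all names are ours, not from the reference implementation.

    #include <set>
    #include <string>

    struct TxPools {
        std::set<std::string> mempool;   // normal, public transactions
        std::set<std::string> stempool;  // invariant: superset of the mempool

        // Anything accepted to the mempool is mirrored into the stempool, so
        // a stem-phase child can spend an unconfirmed normal-mempool parent.
        void AcceptToMempool(const std::string& txid) {
            mempool.insert(txid);
            stempool.insert(txid);
        }

        // Stem transactions enter only the stempool until they fluff.
        void AcceptToStempool(const std::string& txid) {
            stempool.insert(txid);
        }

        // On fluff, the transaction becomes public; it stays in the stempool
        // so the superset invariant is preserved.
        void Fluff(const std::string& txid) {
            mempool.insert(txid);
        }
    };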

Orphans: stem orphans can occur when a node on the stem shuffles its route
between sending dependent transactions. One way to deal with this issue
would be to re-broadcast, after each Dandelion route shuffle, all previous
Dandelion transactions that have not yet been fluffed. This could add a
fair amount
of data and logic. This re-broadcast method also telegraphs the fact that a
Dandelion shuffle has taken place and can result in bursts of transactions
depending on traffic patterns. A second option (which we used in the
reference implementation) is to wait for the fluff phase to begin, at which
point the orphans will be resolved. This should happen within 15 seconds
for most transactions. Do you have any thoughts on which option would be
more palatable? Or if there are other options we have missed?
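
To make the first option concrete, here is a hypothetical sketch (the
container and callback names are ours, not from the reference implementation)
of re-sending every unfluffed stem transaction when the route shuffles:

    #include <functional>
    #include <set>
    #include <string>

    struct DandelionRelayState {
        std::set<std::string> not_yet_fluffed;  // embargoed stem txids

        // Called right after the periodic Dandelion route shuffle.
        // Re-sending every unfluffed stem transaction along the new route
        // keeps dependents on the same path as their parents, at the cost of
        // a traffic burst that reveals when the shuffle happened.
        void OnRouteShuffle(
            const std::function<void(const std::string&)>& send_on_new_route) {
            for (const auto& txid : not_yet_fluffed) send_on_new_route(txid);
        }
    };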

Regarding preferred connections, we have found that making Dandelion
routing decisions based on claims made by peer nodes can cause problems and
therefore would recommend against biasing the peer selection code.

On the implementation side:

* We apply the same logic to the stempool as the mempool in the reference
implementation. The stempool should remain a superset of the mempool to
allow for proper handling of mempool-dependent transactions.

* We'll take a look at setDandelionInventoryKnown.

* We will look into using scheduler jobs instead of a separate
thread--could you point us towards somewhere else in the code that uses a
scheduler job?

Based on the feedback we have received so far, we are planning to
prioritize writing up a clearer spec for node behavior in the BIP. Does
that seem reasonable, or are there other issues that are more pressing at
this point?

Cheers

On Wed, Jun 6, 2018 at 12:01 AM, Pieter Wuille <pieter.wuille@gmail.com>
wrote:

> On Thu, May 10, 2018 at 5:59 AM, Bradley Denby via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
> > Hi all,
> >
> > ...
> >
> > This iteration of Dandelion has been tested on our own small network,
> and we
> > would like to get the implementation in front of a wider audience. An
> > updated
> > BIP document with further details on motivation, specification,
> > compatibility,
> > and implementation is located here:
> > https://github.com/mablem8/bips/blob/master/bip-dandelion.mediawiki
>
> Hi Bradley,
>
> thank you for working on this and going as far as implementing the
> entire protocol. It looks like a very well-worked out idea already,
> and its semantics can probably be adopted pretty much as-is. It would
> be very exciting to bring these kinds of privacy improvements to
> Bitcoin's P2P protocol.
>
> I do have a number of comments on the specification and suggested
> implementation in Bitcoin Core. I'm dumping my thoughts here, though
> at this stage the specification is probably more important. The
> implementation can be discussed more thoroughly when there is a PR
> open.
>
> Specification
>
> * Overall, I think it would be worthwhile to describe the intended
> node behavior in the BIP, at a higher level than Bitcoin Core
> patchsets, but more detailed than what is in the BIP now. The
> patch-based descriptions are hard to read for developers working
> on different systems who are unfamiliar with the Core codebase, and
> don't make it clear to what extent implementation decisions are local
> policy (which can be changed without network coordination), and which
> follow from security or privacy arguments for the protocol.
>
> * Interaction with feefilter (BIP 133) and Bloom filter (BIP 37). When
> peers have given us filters on what transactions they will accept,
> should Dandelion transactions be subject to the same? Should it
> influence the choice of route? One simple possibility is perhaps to
> avoid choosing BIP37 peers as Dandelion routes, and treat transactions
> that do not pass the feefilter of their would-be outgoing Dandelion route
> as an automatic fluff - justified by noting that a transaction whose fee is
> close to the minimum acceptable to the network's mempools is already less
> likely to get good privacy, due to reduced chances of propagation.
>
> * Mempool-dependent transactions. It looks like the current
> implementation accepts Dandelion transactions which are dependent on
> other Dandelion (stempool) transactions and on confirmed blockchain
> transactions, but not ones that are dependent on other unconfirmed
> normal mempool transactions. Is this intentional, or resulting from a
> difficulty in implementing this? Should the correct behaviour be
> specified, or left free for nodes to decide?
>
> * Orphan transactions. It looks like the current implementation
> assumes no orphan transactions, but in a dynamic network (especially
> with occasional shuffling of Dandelion routes), I expect that
> sometimes a dependent transaction will go on a different route than
> its parent. Do you have any thoughts about that (even if not addressed
> in a first implementation)? Could we have a Dandelion-orphan-pool of
> transactions, similar to how the normal mempool has a set of orphan
> transactions?
>
> * Preferred connections. Should we bias the outgoing connection peer
> selection code to prefer Dandelion-capable peers when the number is
> too low?
>
> Implementation
>
> * How do we control the size of the stempool? Should acceptance of a
> transaction to the normal mempool and/or blockchain result in eviction
> of it (and conflicts) from the stempool? The existing code
> intentionally has an upper bound on the size of the mempool to ensure
> predictable resource usage - the introduction of the stempool
> shouldn't change that.
>
> * I don't think you need to fully materialize all the routes. Instead,
> you can just maintain a vector of 2 selected Dandelion-supporting
> peers (and if one disconnects, replace just that one with another
> one). To map incoming peers to an index in that list of peers, you can
> use deterministic randomness (see SipHasher in the source code) with
> the incoming node_id as data and a single global secret nonce (chosen
> at startup, and reset on reshuffle).
>
> * setDandelionInventoryKnown looks like it can grow unboundedly. A
> rolling Bloom filter (like the one used for filterInventoryKnown) would
> perhaps make it easier to guarantee predictable memory usage.
>
> * Use a scheduler job instead of a separate thread for shuffling the
> routes (extra threads use unnecessarily large amounts of memory).
>
> * (nit) coding style: doc/developer-notes.md has a number of
> guidelines on coding style you may want to check out.
>
> Cheers,
>
> --
> Pieter
>
