From: Gavin Andresen
To: Adam Back
Cc: Bitcoin Dev <bitcoin-dev@lists.linuxfoundation.org>
Date: Sun, 8 Nov 2015 14:54:04 +0000
Subject: Re: [bitcoin-dev] summarising security assumptions (re cost metrics)

On Thu, Nov 5, 2015 at 11:03 PM, Adam Back via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:

> Some thoughts, hope this is not off-topic.
>
> Maybe we should summarise the security assumptions and design
> requirements. It is often easier to have clear design discussions by
> first articulating assumptions and requirements.
>
> Validators: Economically dependent full nodes are an important part of
> Bitcoin's security model because they assure Bitcoin security by
> enforcing consensus rules. While full nodes do not have orphan
> risk, we also don't want maliciously crafted blocks with pathological
> validation cost to erode security by knocking reasonable-spec full
> nodes off the network on CPU (or bandwidth) grounds.

Agreed. That is why BIP101 / BitcoinXT includes code to limit the relay and validation cost of blocks.
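
To make that concrete, here is a rough sketch of the kind of limit I mean: BIP101 caps (among other things) how much data a single transaction may hash while computing signature hashes. This is not the actual BIP101/XT code -- the names are made up and the constant should be read as a stand-in:

    #include <cstdint>
    #include <vector>

    // Stand-in cap on bytes hashed while computing signature hashes for one
    // transaction; illustrative, not a quote of the BIP101 spec.
    static const uint64_t MAX_TX_SIGHASH_BYTES = 1300000000ULL;

    struct TxInput { /* script data elided */ };

    struct Transaction {
        std::vector<TxInput> inputs;
        uint64_t serializedSize = 0;  // full serialized size, in bytes
    };

    // Hypothetical check, run before doing any expensive validation work.
    // Legacy signature hashing rehashes roughly the whole transaction once
    // per input, so bytes hashed grow quadratically with transaction size;
    // a maliciously crafted block exploits exactly that.
    bool CheckSighashCost(const Transaction& tx)
    {
        const uint64_t worstCaseBytesHashed =
            static_cast<uint64_t>(tx.inputs.size()) * tx.serializedSize;
        return worstCaseBytesHashed <= MAX_TX_SIGHASH_BYTES;
    }

The point is that the check itself is cheap (no hashing at all), while the attack it prevents is expensive for everyone else.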

> Miners: Miners are in a commodity economics competitive environment
> where various types of attacks and collusion, even with small
> advantage, may see actual use due to the advantage being significant
> relative to the at-times low profit margin.

Agreed, with a quibble: mining economics means they will ALWAYS have a low profit margin.

> It is quite important for bitcoin decentralisation security that small
> miners not be significantly disadvantaged vs large miners. Similarly
> it is important that there not be significant collusion advantages
> that create policy centralisation as a side-effect (for example what
> happened with "SPV mining" or validationless mining during BIP66
> deployment). Examples of attacks include selfish-mining and
> amplifying that kind of attack via artificially large or
> pathologically expensive-to-validate blocks. Or elevating orphan risk
> for others (a miner or a collusion of miners is not at orphan risk for a
> block they created).

Okey dokey-- perhaps we should have another discussion about SPV mining. As far as I know it harmed nobody besides the miners who mindlessly created invalid, empty blocks (well, and besides being very annoying for developers who had to figure out what was happening and get the offending miners to do the right thing).

In any case, it seems to me all of this (except perhaps selfish mining) is independent of the maximum block size, and solutions for all of the above (including selfish mining) should be pursued regardless of what is done with the max block size (e.g. I sent Ittay and Gun email a few minutes ago with some might-be-wrong ideas for how weak block announcements might be used to detect selfish mining).
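
To sketch one of those might-be-wrong ideas: honest miners broadcasting weak blocks (near-miss proofs of work) reveal what they are working on, while a selfish miner withholding found blocks would tend to publish blocks that no prior weak-block announcement hinted at. Everything below is hypothetical -- the names, the thresholds, and the heuristic itself:

    #include <map>
    #include <set>
    #include <string>

    // Hypothetical detector: a full block arriving with no prior weak-block
    // announcement from the same miner on the same parent is weak evidence
    // of withholding. Only a pattern over many blocks means anything.
    class SelfishMiningHeuristic {
        // parent block hash -> miners seen announcing weak blocks on it
        std::map<std::string, std::set<std::string>> weakAnnouncements;
        std::map<std::string, int> suspicionScore;  // miner id -> score

    public:
        void OnWeakBlock(const std::string& parentHash, const std::string& minerId) {
            weakAnnouncements[parentHash].insert(minerId);
        }

        // Called when a full, valid block from minerId extends parentHash.
        void OnBlock(const std::string& parentHash, const std::string& minerId) {
            auto it = weakAnnouncements.find(parentHash);
            bool announced = it != weakAnnouncements.end() && it->second.count(minerId) > 0;
            if (!announced) {
                ++suspicionScore[minerId];  // block appeared "out of nowhere"
            }
            weakAnnouncements.erase(parentHash);  // tip moved on, reset
        }

        bool LooksSelfish(const std::string& minerId, int threshold = 10) const {
            auto it = suspicionScore.find(minerId);
            return it != suspicionScore.end() && it->second >= threshold;
        }
    };

A single surprise block proves nothing (a small miner may simply never find a weak block); only a persistent pattern accumulated over many blocks would be suggestive.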

> Validators vs Miner decentralisation balance:
>
> There is a tradeoff where we can tolerate weak miner decentralisation
> if we can rely on good validator decentralisation, or vice versa. But
> both being weak is risky. Currently, given that mining decentralisation
> itself is weak, validator decentralisation is the critical remaining
> defence - ie security depends more on validator decentralisation than
> it would if mining decentralisation was in better shape.

I'm very disappointed you don't mention the tradeoff at "the other end of the bathtub" -- key-holder versus validator decentralization balance. Did you see the excellent Poon/Dryja "bathtub" presentation at Montreal?

    https://scalingbitcoin.org/montreal2015/presentations/Day2/3-JosephPoonAndThaddeusDryja.pdf

> Security:
>
> We should consider the pathological case, not average or default
> behaviour, because we can not assume people will follow the defaults,
> only the consensus-enforced rules.

Agreed, which is why BIP101/XT consider pathological behavior.

> We should not discount attacks that have not seen exploitation to
> date. We have maybe benefitted from universal good-will (everybody
> thinks Bitcoin is cool, particularly people with skills to find and
> exploit attacks).

Disagree on wording: we should not ignore attacks that have not seen exploitation. But in the never-ending list of things to be worried about and to write code for, attacks that have not been seen should be lower priority than attacks that have been seen, either in Bitcoin or elsewhere.

E.g. Bitcoin has never seen a buffer-overflow attack, but we absolutely positively need to put a very high priority on the network attack surface -- we know buffer-overflow attacks are commonly exploited.

On the other hand, Bitcoin has never seen a "Goldfinger attack" (take a big short position on Bitcoin, then find a way to destroy confidence so the price drops and you can profit), and "Goldfinger attacks" don't seem to be common anywhere (you don't see people taking huge short positions in companies and then bombing their factories). There might be a reason Bitcoin is more vulnerable, or the same checks-and-balances may apply (e.g. whoever took the other side of the large short has a strong incentive to report you, and, assuming you got paid in something other than Bitcoin, that is probably possible).

(Aside: anybody who wants to talk about the likelihood of "Goldfinger attacks" please start a thread somewhere else; I don't think that's appropriate for bitcoin-dev.)
> We can consider a hierarchy of defences, most secure to least:
>
> 1. consensus rule enforced (attacker loses block reward)
> 2. economic alignment (attacker loses money)
> 3. overt (profitable, but overt attacks are less likely to be exploited)
> 4. meta-incentive (relying only on the meta-incentive to not damage the ecosystem)

Agreed.

> Best practices:
>
> We might want to list some best practices that are important for the
> health and security of the Bitcoin network.
>
> Rule of thumb KISS stuff:
>
> We should aim to keep things simple in general and to avoid creating
> complex optimisation problems for transaction processors, wallets, and
> miners.

I agree with KISS.
I think we can't avoid creating complex optimization problems sometimes -- see, for example, the difficulty of a wallet predicting what transaction fee is needed for a transaction to get confirmed within X blocks (lots of factors involved: max block size, time since last block, miner policy as expressed in previous blocks, transactions currently waiting in the mempool...). I do agree we should prefer simple optimization problems over complex ones wherever we can.
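
As an illustration of even the simple end of that problem, a naive wallet-side estimator might just look at the fee rates of recently confirmed transactions and pick a percentile. This is a toy sketch under that assumption, not Core's or XT's actual estimation code:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Toy fee estimator: remember the fee rates (satoshis per kilobyte) of
    // recently confirmed transactions and suggest the rate at a percentile.
    // A real estimator would also weight by confirmation delay, mempool
    // depth, max block size, miner policy, and so on.
    class NaiveFeeEstimator {
        std::vector<uint64_t> recentFeeRates;  // sat/kB of confirmed txs

    public:
        void OnConfirmedTx(uint64_t feeRateSatPerKB) {
            recentFeeRates.push_back(feeRateSatPerKB);
            if (recentFeeRates.size() > 10000)  // keep a bounded window
                recentFeeRates.erase(recentFeeRates.begin());
        }

        // Fee rate that beat `percentile` percent of recent confirmed txs;
        // a higher percentile means more likely to confirm quickly.
        uint64_t EstimateSatPerKB(double percentile) const {
            if (recentFeeRates.empty()) return 1000;  // arbitrary fallback
            std::vector<uint64_t> sorted = recentFeeRates;
            std::sort(sorted.begin(), sorted.end());
            size_t idx = static_cast<size_t>(percentile / 100.0 * (sorted.size() - 1));
            return sorted[idx];
        }
    };

Even this toy has knobs (window size, percentile, fallback rate) whose right values depend on exactly the factors listed above.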

> We may want to consider an incremental approach (shorter time frame or
> less technically ambitious) in the interests of simplifying and
> getting something easier to arrive at consensus on, and thus faster to
> deploy.

Or we may want to go with something that is already tested and deployed...

> We should not let the perfect be the enemy of the good. But we should
> not store new problems for the future; costs are stacked in favour of
> getting it right vs A/B testing on the live network.

I disagree about "storing new problems for the future." We don't know what the problems will be in the future, so there is always a leap of faith that future engineers will be smart enough to fix the engineering problems that arise (see the worries over quantum computing advances making ECDSA obsolete) -- ESPECIALLY if we have thumbnail sketches of solutions that we're reasonably certain will work (e.g. switching to a quantum-resistant signature algorithm via soft-fork).

> Not everything may be fixable in one go, for complexity reasons or
> because there is no clear solution for some issues. We should
> work incrementally.

I think the disagreement is over how big a change fits into the definition of "incrementally."

As Jeff Garzik has pointed out, the recent change from "we never hit the maximum block size limit" to "we regularly run into the maximum block size limit" was a large, NON-incremental change...

--
Gavin Andresen