Happy Lunar New Year, everyone!

Gavin,

> I suspect there ARE a significant percentage of un-maintained full nodes-- probably 30 to 40%. Losing those nodes will not be a problem, for three reasons:

The notion of a large set (30% to 40%) of un-maintained full nodes is not evident on the network. Below is data based on a personal snapshot taken around December 2015, with the following assumptions:
1) Nodes running non-standard version strings are considered a preference of the node operator and are not included.
2) Nodes below 0.10 are counted as so-called "un-maintained", even though they too can be a choice of the node operator.

Satoshi:0.9.3, 105
Satoshi:0.8.6, 74
Satoshi:0.9.1, 49
Satoshi:0.9.2.1, 42
Satoshi:0.8.5, 39
Satoshi:0.8.1, 35
Satoshi:0.9.5, 14
Satoshi:0.8.3, 12
Satoshi:0.9.4, 10
Satoshi:0.9.99, 10
Satoshi:0.9.0, 5
Satoshi:0.9.2, 5
Satoshi:0.8.0, 4
Satoshi:0.8.99, 1
Satoshi:0.8.4, 1

There are 406 nodes in total that fall under the un-maintained category, which is below 10% of the network.
Luke also has some data that shows similar results: http://luke.dashjr.org/programs/bitcoin/files/charts/versions.txt
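For anyone who wants to reproduce the tally, here is a minimal sketch, assuming a crawler snapshot in a "useragent,count" CSV layout (the filename and layout are my inventions, not the output of any particular tool):

    // Sketch only: tally "useragent,count" lines from a crawler snapshot and
    // report the share of pre-0.10 Satoshi nodes, per the two assumptions above.
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>

    int main() {
        std::ifstream snapshot("node-versions.csv");  // hypothetical input file
        std::string line;
        long total = 0, unmaintained = 0;
        while (std::getline(snapshot, line)) {
            std::istringstream row(line);
            std::string agent, countField;
            if (!std::getline(row, agent, ',') || !std::getline(row, countField))
                continue;
            long count = 0;
            try { count = std::stol(countField); } catch (...) { continue; }
            total += count;  // every node counts toward the network total
            // Only standard "Satoshi:" strings below 0.10 are treated as
            // un-maintained; non-standard strings are operator preference.
            if (agent.rfind("Satoshi:0.8", 0) == 0 || agent.rfind("Satoshi:0.9", 0) == 0)
                unmaintained += count;
        }
        if (total > 0)
            std::cout << unmaintained << " un-maintained out of " << total
                      << " nodes (" << 100.0 * unmaintained / total << "%)\n";
        return 0;
    }

Summing the table above the same way gives the 406 figure.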

> The network could shrink by 60% and it would still have plenty of open connection slots

I'm afraid we will have to agree to disagree if you think that dropping support for 60% of the nodes on the network when rolling out an upgrade is a sane default.

> People are committing to spinning up thousands of supports-2mb-nodes during the grace period.

Thousands of nodes?! Where did you get this figure? Who are these people? Please elaborate.

> We could wait a year and pick up maybe 10 or 20% more.

I don't understand this statement at all; please explicate.

-- 
Yifu Guo
"Life is an everlasting self-improvement."

On Sat, Feb 6, 2016 at 10:37 AM, Gavin Andresen via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
Responding to "28 days is not long enough" :

I keep seeing this claim made with no evidence to back it up.  As I said, I surveyed several of the biggest infrastructure providers and the btcd lead developer and they all agree "28 days is plenty of time."

For individuals... why would it take somebody longer than 28 days either to download and restart their bitcoind, or to patch and then re-run (the patch can be a one-line change of MAX_BLOCK_SIZE from 1000000 to 2000000)?
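(For reference, a sketch of that patch, assuming the constant still lives in src/consensus/consensus.h as in recent Core source:)

    // src/consensus/consensus.h -- file location assumed, not verified
    // against every release. The entire change described above:
    static const unsigned int MAX_BLOCK_SIZE = 2000000;  // was 1000000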

For the Bitcoin Core project:  I'm well aware of how long it takes to roll out new binaries, and 28 days is plenty of time.

I suspect there ARE a significant percentage of un-maintained full nodes-- probably 30 to 40%. Losing those nodes will not be a problem, for three reasons:
1) The network could shrink by 60% and it would still have plenty of open connection slots
2) People are committing to spinning up thousands of supports-2mb-nodes during the grace period.
3) We could wait a year and pick up maybe 10 or 20% more.

I strongly disagree with the statement that there is no cost to a longer grace period. There is broad agreement that a capacity increase is needed NOW.

To bring it back to bitcoin-dev territory:  are there any TECHNICAL arguments why an upgrade would take a business or individual longer than 28 days?


Responding to Luke's message:

On Sat, Feb 6, 2016 at 1:12 AM, Luke Dashjr via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
> On Friday, February 05, 2016 8:51:08 PM Gavin Andresen via bitcoin-dev wrote:
>> Blog post on a couple of the constants chosen:
>>   http://gavinandresen.ninja/seventyfive-twentyeight
>
> Can you put this in the BIP's Rationale section (which appears to be mis-named
> "Discussion" in the current draft)?

I'll rename the section and expand it a little. I think standards documents like BIPs should be concise, though (written for implementors), so I'm not going to recreate the entire blog post there.
 
>
>> Signature operations in un-executed branches of a Script are not counted.
>> OP_CHECKMULTISIG evaluations are counted accurately; if the signature for a
>> 1-of-20 OP_CHECKMULTISIG is satisfied by the public key nearest the top
>> of the execution stack, it is counted as one signature operation. If it is
>> satisfied by the public key nearest the bottom of the execution stack, it
>> is counted as twenty signature operations. Signature operations involving
>> invalidly encoded signatures or public keys are not counted towards the
>> limit.
>
> These seem like they will break static analysis entirely. That was a noted
> reason for creating BIP 16 to replace BIP 12. Is it no longer a concern? Would
> it make sense to require scripts to commit to the total accurate-sigop count
> to fix this?

After implementing static counting and accurate counting... I was wrong. Accurate/dynamic counting and limiting is quick and simple and can be completely safe (the counting code can be told the limit and can "early-out" validation).
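To sketch the shape of it (invented names, my illustration rather than the actual Classic code):

    // Illustration only. Accurate counting charges sigops as they are
    // actually executed, and the counter "early-outs" as soon as the
    // block's budget is exhausted, so there is nothing to statically analyze.
    #include <cstdint>

    class SigOpBudget {
    public:
        explicit SigOpBudget(uint64_t limit) : nLimit(limit), nUsed(0) {}
        // Called from the script interpreter for each signature check that
        // actually runs; returns false once the budget is exceeded so the
        // caller can abort validation immediately.
        bool Consume(uint64_t nOps) {
            nUsed += nOps;
            return nUsed <= nLimit;
        }
    private:
        uint64_t nLimit;
        uint64_t nUsed;
    };

    // For OP_CHECKMULTISIG, public keys are tried from the top of the stack
    // down, so the accurate charge is the number of keys examined before all
    // signatures match: 1 for a 1-of-20 satisfied by the top key, 20 if it
    // is satisfied by the bottom one.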

I think making scripts commit to a total accurate sigop count is a bad idea-- it would make multisignature signing more complicated for zero benefit.  E.g. if you're circulating a partially signed transaction that must be signed by 2 of 5 people, you can end up with a transaction that requires 2, 3, 4, or 5 signature operations to validate (depending on which public keys are used to do the signing).  The first signer might have no idea who else would sign and wouldn't know the accurate sigop count.
 
>
>> The amount of data hashed to compute signature hashes is limited to
>> 1,300,000,000 bytes per block.
>
> The rationale for this wasn't in your blog post. I assume it's based on the
> current theoretical max at 1 MB blocks? Even a high-end PC would probably take
> 40-80 seconds just for the hashing, however - maybe a lower limit would be
> best?

It is slightly more hashing than was required to validate block number 364,422.

There are a couple of advantages to a very high limit:

1) When the fork is over, special-case code for dealing with old blocks can be eliminated, because all old blocks satisfy the new limit.

2) More importantly, if the limit is small enough that it might get hit by standard transactions, then block creation code (CreateNewBlock() / getblocktemplate / or some external transaction-assembling software) will have to solve an even more complicated bin-packing problem to optimize for fees paid.

In practice, the 20,000 sigop limit will always be reached before MAX_BLOCK_SIGHASH.
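(In code, that accounting can be as simple as this sketch -- invented names, not the actual patch:)

    // Sketch with invented names: keep a running total of bytes fed into
    // signature hashes for the block, and fail validation past the cap.
    #include <cstdint>

    static const uint64_t MAX_BLOCK_SIGHASH = 1300000000;  // bytes per block

    struct SigHashBudget {
        uint64_t nBytesHashed = 0;
        // Called with the preimage size of each signature hash computed
        // while validating the block; false means the block is invalid.
        bool Add(uint64_t nBytes) {
            nBytesHashed += nBytes;
            return nBytesHashed <= MAX_BLOCK_SIGHASH;
        }
    };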

 
>
>> Miners express their support for this BIP by ...
>
> But miners don't get to decide hardforks. How does the economy express their
> support for it? What happens if miners trigger it without consent from the
> economy?

"The economy" does support this.

 
>
> If you are intent on using the version bits to trigger the hardfork, I suggest
> rephrasing this such that miners should only enable the bit when they have
> independently confirmed economic support (this means implementations need a
> config option that defaults to off).

Happy to add words about economic majority.

Classic will not implement a command-line option (the act of running Classic is "I opt in"), but I'm happy to add one in a pull request to Core, assuming Core would not see such a pull request as having any hostile intent.


>
>> SPV (simple payment validation) wallets are compatible with this change.
>
> Would prefer if this is corrected to "Light clients" or something. Actual SPV
> wallets do not exist at this time, and would not be compatible with a
> hardfork.

Is there an explanation of SPV versus "Light Client" written somewhere more permanent than a reddit comment or forum post that I can point to?
 
>
>> In the short term, an increase is needed to continue the current economic
>> policies with regards to fees and block space, matching market expectations
>> and preventing market disruption.
>
> IMO this sentence is the most controversial part of your draft, and it
> wouldn't suffer a loss to remove it (or at least make it subjective).

Happy to remove.
 
> I would also prefer to see any hardfork:
>
> 1. Address at least the simple tasks on the hardfork wishlist (eg, enable some
>    disabled opcodes; fix P2SH for N-of->15 multisig; etc).

Those would be separate BIPs. (According to BIP 1, smaller is better.)

After this 2MB bump, I agree we need to agree on a process for the next hard fork to avoid all of the unnecessary drama.

> 2. Be deployed as a soft-hardfork so as not to leave old nodes entirely
>    insecure.

I haven't been paying attention to all of the "soft-hardfork/hard-softfork/etc" terminology so have no idea what you mean. Is THAT written up somewhere?

--
Gavin Andresen

