I’m confident that we could work with the miners with whom we have good relationships to start including the root hash of the (lagging) UTXO set in their coinbase transactions, in order to begin transforming this idea into reality. We could also issue regular transactions from “semi-trusted” addresses controlled by known people that include the same root hash in an OP_RETURN output, which would allow cross-checking against the miners’ UTXO commitments, as part of this initial “prototype” system.
This would “get the ball rolling” on UTXO commitments in a permissionless way (no one can stop us from doing this). If the results from this prototype commitment scheme were positive, then perhaps there would be support from the community and miners for enforcing a new rule that requires the (lagging) root hashes to be included in new blocks. At that point, the UTXO commitment scheme would no longer be a prototype but a trusted feature of the Bitcoin network.
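As a minimal sketch of the cross-check described above: a watcher collects the root hashes seen in miners’ coinbase transactions and those published via OP_RETURN by the semi-trusted parties, and flags any disagreement. The extraction of the hashes from actual scripts is omitted; this only shows the agreement check itself.

```python
def cross_check(coinbase_roots: list, op_return_roots: list) -> bool:
    """Return True only if every observed UTXO commitment agrees.

    coinbase_roots:  root hashes extracted from miners' coinbase txs
    op_return_roots: root hashes from semi-trusted OP_RETURN outputs
    (Extraction from real scripts is out of scope for this sketch.)
    """
    roots = set(coinbase_roots) | set(op_return_roots)
    # Exactly one distinct root across both sources means agreement;
    # zero observations or any mismatch is treated as failure.
    return len(roots) == 1
```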
On that topic, are there any existing proposals detailing a canonical ordering of the UTXO set and a scheme to calculate the root hash?
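For concreteness, one possible scheme (purely an illustrative assumption on my part, not a reference to any existing proposal): order UTXOs lexicographically by (txid, output index) and build a Bitcoin-style Merkle tree of double-SHA256 hashes over them.

```python
import hashlib
from typing import List, Tuple

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, as used throughout Bitcoin."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def utxo_merkle_root(utxos: List[Tuple[bytes, int, bytes]]) -> bytes:
    """Compute a Merkle root over the UTXO set.

    Each UTXO is (txid, output_index, serialized_output). The canonical
    ordering chosen here -- lexicographic by (txid, output_index) -- is
    an illustrative assumption, not an agreed-upon standard.
    """
    leaves = [
        sha256d(txid + vout.to_bytes(4, "little") + out)
        for txid, vout, out in sorted(utxos, key=lambda u: (u[0], u[1]))
    ]
    if not leaves:
        return sha256d(b"")
    # Pair up hashes level by level, duplicating the last hash when a
    # level has an odd count (as Bitcoin's block Merkle tree does).
    while len(leaves) > 1:
        if len(leaves) % 2 == 1:
            leaves.append(leaves[-1])
        leaves = [sha256d(leaves[i] + leaves[i + 1])
                  for i in range(0, len(leaves), 2)]
    return leaves[0]
```

Because the leaves are sorted before hashing, any two nodes holding the same UTXO set compute the same root regardless of the order in which they enumerate it.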
Best regards,
Peter
What about periodically committing the entire UTXO set to a special checkpoint block which becomes the new de facto Genesis block?
Daniele
------------------------------

Message: 5
Date: Wed, 29 Mar 2017 16:41:29 +0000
From: Andrew Johnson <andrew.johnson83@gmail.com>
To: David Vorick <david.vorick@gmail.com>
Cc: Bitcoin Dev <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting
Message-ID: <CAAy62_+JtoAuM-RsrAAp5eiGiO+OHLDjzqgbnF2De7TUU7TyYg@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

I believe that as we continue to add users to the system by scaling capacity, we will see more new nodes appear, but I'm at a bit of a loss as to how to empirically prove it.

I do see your point on increasing load on archival nodes, but the majority of that load is going to come from new nodes coming online; they're the only ones going after very old blocks. I could see that as a potential attack vector: overwhelm the archival nodes by spinning up new nodes constantly, thereby making it difficult for a "real" new node to get up to speed in a reasonable amount of time.

Perhaps the answer there would be a way to pay an archival node a small amount of bitcoin in order to retrieve blocks older than a certain cutoff? Include an IP address for the node asking for the data as metadata in the transaction... Archival nodes could set and publish their own policy and let the market decide what those older blocks are worth. This would also help incentivize running archival nodes, which we do need. Of course, this isn't very user friendly.

We can take this to bitcoin-discuss if we're getting too far off topic.

On Wed, Mar 29, 2017 at 11:25 AM David Vorick <david.vorick@gmail.com> wrote:

> On Mar 29, 2017 12:20 PM, "Andrew Johnson" <andrew.johnson83@gmail.com> wrote:
>
> What's stopping these users from running a pruned node? Not every node needs to store a complete copy of the blockchain.
>
> Pruned nodes are not the default configuration; if it were the default configuration then I think you would see far more users running a pruned node.
>
> But that would also substantially increase the burden on archive nodes.
>
> Further discussion about disk space requirements should be taken to another thread.

--
Andrew Johnson

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.linuxfoundation.org/pipermail/bitcoin-dev/attachments/20170329/9b48ebe3/attachment.html>

------------------------------
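The pay-per-historical-block idea above could be sketched roughly as follows. Every name, number, and the credit-tracking scheme here are illustrative assumptions, not a real protocol: in practice the payment would arrive as an on-chain transaction carrying the requester's IP as metadata, which this sketch reduces to a simple credit ledger.

```python
class ArchivalPolicy:
    """Hypothetical serving policy for an archival node: blocks at or
    above `cutoff` height are free; older blocks cost `price_sat`
    satoshis each, debited from credit the requester has paid in."""

    def __init__(self, cutoff: int, price_sat: int):
        self.cutoff = cutoff
        self.price_sat = price_sat
        self.credit = {}  # requester IP -> satoshis of credit

    def record_payment(self, requester_ip: str, amount_sat: int) -> None:
        # Stand-in for noticing an on-chain payment whose metadata
        # names this IP (e.g. in an OP_RETURN-style output).
        self.credit[requester_ip] = self.credit.get(requester_ip, 0) + amount_sat

    def may_serve(self, requester_ip: str, height: int) -> bool:
        if height >= self.cutoff:
            return True  # recent blocks stay free
        if self.credit.get(requester_ip, 0) >= self.price_sat:
            self.credit[requester_ip] -= self.price_sat  # charge per block
            return True
        return False
```

Each archival node would publish its own `cutoff` and `price_sat`, letting the market settle what historical blocks are worth.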
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev