From: "Russell O'Connor"
Date: Tue, 22 Mar 2022 11:08:33 -0400
To: ZmnSCPxj, Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks
Setting aside my thoughts that something like Simplicity would make a better platform than Bitcoin Script (due to expressions operating on a narrower interface than the entire stack (I'm looking at you, OP_DEPTH)), there is an issue with namespace management.

If I understand correctly, your implication was that once opcodes are redefined by an OP_RETURN transaction, subsequent transactions using that opcode refer to the new microcode.  But then we have a race condition between people submitting transactions expecting the outputs to refer to the old code and having their code redefined by the time they do get confirmed (or worse, having them reorged).

I've partially addressed this issue in my Simplicity design, where the commitment of a Simplicity program in a scriptpubkey covers the hash of the specification of the jets used, which commits unambiguously to the semantics (rightly or wrongly).  But the issue resurfaces at redemption time, where I (currently) have a consensus-critical map of codes to jets that is used to decode the witness data into a Simplicity program.  If one were to allow this map of codes to jets to be replaced (rather than just extended), then it would cause redemption to fail, because the hash of the new jets would no longer match the hash of the jets appearing in the input's scriptpubkey commitment.  While this is still not good and I don't recommend it, it is probably better than letting the semantics of your programs be changed out from under you.
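
(As a toy illustration of that failure mode only: the names `JetCode`, `JetSpec`, and the fake `hashSpecs` below are stand-ins, not how Simplicity actually encodes anything.)

```Haskell
import qualified Data.ByteString as BS
import qualified Data.Map as Map

type JetCode    = Int
type JetSpec    = BS.ByteString   -- stand-in for a jet's specification
type Commitment = BS.ByteString

-- Stand-in for a real hash over the jet specifications.
hashSpecs :: [JetSpec] -> Commitment
hashSpecs = BS.concat

-- At commitment time, the scriptpubkey covers the hash of the specs of
-- the jets actually used by the program.
commitJets :: Map.Map JetCode JetSpec -> [JetCode] -> Maybe Commitment
commitJets codeToJet usedCodes =
  hashSpecs <$> traverse (`Map.lookup` codeToJet) usedCodes

-- At redemption time the (possibly replaced) code-to-jet map is consulted
-- again; redemption succeeds only if the recomputed hash still matches.
redeemOk :: Map.Map JetCode JetSpec -> [JetCode] -> Commitment -> Bool
redeemOk codeToJet usedCodes committed =
  commitJets codeToJet usedCodes == Just committed
```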

This comment is not meant as an endorsement of this idea, which is a little bit out there, at least as far as Bitcoin is concerned. :)

My long term plans are to move this consensus-critical map of codes out of the consensus layer and into the p2p layer, where peers can negotiate their own encodings between each other.  But that plan is also a little bit out there, and it still doesn't solve the issue of how to weight reused jets, where weight is still consensus critical.
On Tue, Mar 22, 2022 at 1:37 AM ZmnSCPxj via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
Good morning list,

It is entirely possible that I have gotten into the deep end and am now drowning in insanity, but here goes....

Subject: Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks
Introduction
============

Recent (Early 2022) discussions on the bitcoin-dev mailing
list have largely focused on new constructs that enable new
functionality.

One general idea can be summarized this way:

* We should provide a very general language.
  * Then later, once we have learned how to use this language,
    we can softfork in new opcodes that compress sections of
    programs written in this general language.

There are two arguments against this style:

1.  One of the most powerful arguments on the "general" side of
    the "general v specific" debate is that softforks are
    painful because people are going to keep reiterating the
    activation parameters debate in a memoryless process, so
    we want to keep the number of softforks low.
    * So, we should just provide a very general language and
      never softfork in any other change ever again.
2.  One of the most powerful arguments on the "general" side of
    the "general v specific" debate is that softforks are
    painful because people are going to keep reiterating the
    activation parameters debate in a memoryless process, so
    we want to keep the number of softforks low.
    * So, we should just skip over the initial very general
      language and individually activate small, specific
      constructs, reducing the needed softforks by one.

By taking a page from microprocessor design, it seems to me
that we can use the same above general idea (a general base
language where we later "bless" some sequence of operations)
while avoiding some of the arguments against it.

Digression: Microcodes In CISC Microprocessors
----------------------------------------------

In the 1980s and 1990s, two competing microprocessor design
paradigms arose:

* Complex Instruction Set Computing (CISC)
  - Few registers, many addressing/indexing modes, variable
    instruction length, many obscure instructions.
* Reduced Instruction Set Computing (RISC)
  - Many registers, usually only immediate and indexed
    addressing modes, fixed instruction length, few
    instructions.

In CISC, the microprocessor provides very application-specific
instructions, often with a small number of registers with
specific uses.
The instruction set was complicated, and often required
multiple specific circuits for each application-specific
instruction.
Instructions had varying sizes and varying number of cycles.

In RISC, the microprocessor provides fewer instructions, and
programmers (or compilers) are supposed to generate the code
for all application-specific needs.
The processor provided large register banks which could be
used very generically and interchangeably.
Instructions had the same size and every instruction took a
fixed number of cycles.

In CISC you usually had shorter code which could be written
by human programmers in assembly language or machine language.
In RISC, you generally had longer code, often difficult for
human programmers to write, and you *needed* a compiler to
generate it (unless you were very careful, or insane enough
you could scroll over multiple pages of instructions without
becoming more insane), or else you might forget about stuff
like jump slots.

For the most part, RISC lost, since most modern processors
today are x86 or x86-64, an instruction set with varying
instruction sizes, varying number of cycles per instruction,
and complex instructions with application-specific uses.

Or at least, it *looks like* RISC lost.
In the 90s, Intel was struggling since their big beefy CISC
designs were becoming too complicated.
Bugs got past testing and into mass-produced silicon.
RISC processors were beating the pants off 386s in terms of
raw number of computations per second.

RISC processors had the major advantage that they were
inherently simpler, due to having fewer specific circuits
and filling up their silicon with general-purpose registers
(which are large but very simple circuits) to compensate.
This meant that processor designers could fit more of the
design in their merely human meat brains, and were less
likely to make mistakes.
The fixed number of cycles per instruction made it trivial
to create a fixed-length pipeline for instruction processing,
and practical RISC processors could deliver one instruction
per clock cycle.
Worse, the simplicity of RISC meant that smaller and less
experienced teams could produce viable competitors to the
Intel x86s.

So what Intel did was to use a RISC processor, and add a
special Instruction Decoder unit.
The Instruction Decoder would take the CISC instruction
stream accepted by classic Intel x86 processors, and emit
RISC instructions for the internal RISC processor.
CISC instructions might be variable length and have variable
number of cycles, but the emitted RISC instructions were
individually fixed length and fixed number of cycles.
A CISC instruction might be equivalent to a single RISC
instruction, or several.

With this technique, Intel could deliver performance
approaching their RISC-only competition, while retaining
back-compatibility with existing software written for their
classic CISC processors.

At its core, the Instruction Decoder was a table-driven
parser.
This lookup table could be stored into on-chip flash memory.
This had the advantage that the on-chip flash memory could be
updated in case of bugs in the implementation of CISC
instructions.
This on-chip flash memory was then termed "microcode".

Important advantages of this "microcode" technique were:

* Back-compatibility with existing instruction sets.
* Easier and more scalable underlying design due to ability
  to use RISC techniques while still supporting CISC instruction
  sets.
* Possible to fix bugs in implementations of complex CISC
  instructions by uploading new microcode.

(Obviously I have elided a bunch of stuff, but the above
rough sketch should be sufficient as introduction.)

Bitcoin Consensus Layer As Hardware
-----------------------------------

While Bitcoin fullnode implementations are software, because
of the need for consensus, this software is not actually very
"soft".
One can consider that, just as it would take a long time for
new hardware to be designed with a changed instruction set,
it is similarly taking a long time to change Bitcoin to
support changed feature sets.

Thus, we should really consider the Bitcoin consensus layer,
and its SCRIPT, as hardware that other Bitcoin software and
layers run on top of.

This thus opens up the thought of using techniques that were
useful in hardware design.
Such as microcode: a translation layer from "old" instruction
sets to "new" instruction sets, with the ability to modify this mapping.

Microcode For Bitcoin SCRIPT
============================

I propose:

* Define a generic, low-level language (the "RISC language").
* Define a mapping from a specific, high-level language to
  the above language (the microcode).
* Allow users to sacrifice Bitcoins to define a new microcode.
* Have users indicate the microcode they wish to use to
  interpret their Tapscripts.

As a concrete example, let us consider the current Bitcoin
SCRIPT as the "CISC" language.

We can then support a "RISC" language that is composed of
general instructions, such as arithmetic, SECP256K1 scalar
and point math, bytevector concatenation, sha256 midstates,
bytevector bit manipulation, transaction introspection, and
so on.
This "RISC" language would also be stack-based.
As the "RISC" language would have more possible opcodes,
we may need to use 2-byte opcodes for the "RISC" language
instead of 1-byte opcodes.
Let us call this "RISC" language the micro-opcode language.

Then, the "microcode" simply maps the existing Bitcoin
SCRIPT `OP_` codes to one or more `UOP_` micro-opcodes.

An interesting fact is that stack-based languages have
automatic referential transparency; that is, if I define
some new word in a stack-based language and use that word,
I can replace verbatim the text of the new word in that
place without issue.
Compare this to a language like C, where macro authors
have to be very careful about inadvertent variable
capture, wrapping `do { ... } while(0)` to avoid problems
with `if` and multiple statements, multiple execution, and
so on.

Thus, a sequence of `OP_` opcodes can be mapped to a
sequence of equivalent `UOP_` micro-opcodes without
changing the interpretation of the source language, an
important property when considering such a "compiled"
language.

We start with a default microcode which is equivalent
to the current Bitcoin language.
When users want to define a new microcode to implement
new `OP_` codes or change existing `OP_` codes, they
can refer to a "base" microcode, and only have to
provide the new mappings.

A microcode is fundamentally just a mapping from an
`OP_` code to a variable-length sequence of `UOP_`
micro-opcodes.

```Haskell
import Data.Map
-- type Opcode
-- type UOpcode
newtype Microcode = Microcode (Map.Map Opcode [UOpcode])
```

Semantically, the SCRIPT interpreter processes `UOP_`
micro-opcodes.

```Haskell
-- instance Monad Interpreter -- can `fail`.
interpreter :: Transaction -> TxInput -> [UOpcode] -> Interpreter ()
```
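
(Purely for illustration, a toy, self-contained sketch of what such a micro-opcode interpreter could look like over a bytevector stack; the `UOpcode` constructors and `uopStep` here are placeholders, not the proposal's actual definitions.)

```Haskell
import Control.Monad (foldM)
import qualified Data.ByteString as BS

data UOpcode
  = UOP_PUSH BS.ByteString   -- push an immediate bytevector
  | UOP_CAT                  -- concatenate the top two stack items
  | UOP_EQUALVERIFY          -- fail unless the top two items are equal
  | UOP_FAIL                 -- unconditionally fail
  deriving (Show)

type Stack = [BS.ByteString]

-- One micro-opcode step; Nothing models interpreter failure.
uopStep :: Stack -> UOpcode -> Maybe Stack
uopStep st           (UOP_PUSH b) = Just (b : st)
uopStep (a : b : st) UOP_CAT      = Just (BS.append b a : st)
uopStep (a : b : st) UOP_EQUALVERIFY
  | a == b                        = Just st
uopStep _            _            = Nothing   -- includes UOP_FAIL

runUOps :: [UOpcode] -> Maybe Stack
runUOps = foldM uopStep []
```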

Example
-------

Suppose a user wants to re-enable `OP_CAT`, and nothing
else.

That user creates a microcode, referring to the current
default Bitcoin SCRIPT microcode as the "base".
The base microcode defines `OP_CAT` as equal to the
sequence `UOP_FAIL`, i.e. a micro-opcode that always fails.
However, the new microcode will instead redefine the
`OP_CAT` as the micro-opcode sequence `UOP_CAT`.
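
(A minimal sketch of that microcode, restating the `Microcode` type from above so the fragment stands alone; the tiny `Opcode`/`UOpcode` enumerations and `baseMicrocode` are placeholders, since the real tables would cover every existing opcode.)

```Haskell
import qualified Data.Map as Map

data Opcode  = OP_DUP  | OP_CAT             deriving (Eq, Ord, Show)
data UOpcode = UOP_DUP | UOP_CAT | UOP_FAIL deriving (Show)

newtype Microcode = Microcode (Map.Map Opcode [UOpcode])

-- Base microcode: current Bitcoin SCRIPT, where OP_CAT always fails.
baseMicrocode :: Microcode
baseMicrocode = Microcode (Map.fromList [ (OP_DUP, [UOP_DUP])
                                        , (OP_CAT, [UOP_FAIL]) ])

-- New microcode: identical to the base except that OP_CAT now expands
-- to the single micro-opcode UOP_CAT.
catMicrocode :: Microcode
catMicrocode =
  let Microcode base = baseMicrocode
  in  Microcode (Map.insert OP_CAT [UOP_CAT] base)
```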

Microcodes then have a standard way of being represented
as a byte sequence.
The user serializes their new microcode as a byte
sequence.
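
(The standard serialization is left unspecified in the post; purely as an illustration, one possible byte format with 1-byte opcodes and 2-byte micro-opcodes might look like the following.)

```Haskell
import qualified Data.ByteString as BS
import qualified Data.ByteString.Builder as B
import qualified Data.ByteString.Lazy as BSL
import qualified Data.Map as Map
import Data.Word (Word8, Word16)

-- For each redefined OP_ code: the 1-byte opcode, a 2-byte count, then
-- the 2-byte UOP_ codes of its expansion.
serializeMicrocode :: Map.Map Word8 [Word16] -> BS.ByteString
serializeMicrocode m =
  BSL.toStrict . B.toLazyByteString . mconcat $
    [ B.word8 op
        <> B.word16BE (fromIntegral (length uops))
        <> foldMap B.word16BE uops
    | (op, uops) <- Map.toList m ]
```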

Then, the user creates a new transaction where one of
the outputs contains, say, 1.0 Bitcoins (exact required
value TBD), and has the `scriptPubKey` of
`OP_TRUE OP_RETURN <serialized_microcode>`.
This output is a "microcode introduction output", which
is provably unspendable, thus burning the Bitcoins.
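
(A sketch of assembling that output script: `0x51` is OP_TRUE/OP_1 and `0x6a` is OP_RETURN; `pushData` is a hypothetical helper for the minimal push encoding, not anything defined in the proposal.)

```Haskell
import qualified Data.ByteString as BS
import Data.Bits (shiftR, (.&.))
import Data.Word (Word8)

-- Hypothetical helper: minimal push encoding for a possibly large payload.
pushData :: BS.ByteString -> BS.ByteString
pushData d
  | n < 0x4c    = BS.cons (fromIntegral n) d
  | n <= 0xff   = BS.pack [0x4c, fromIntegral n] <> d        -- OP_PUSHDATA1
  | n <= 0xffff = BS.pack [0x4d, lo, hi] <> d                -- OP_PUSHDATA2
  | otherwise   = error "payload too large for this sketch"
  where
    n  = BS.length d
    lo = fromIntegral (n .&. 0xff)              :: Word8
    hi = fromIntegral ((n `shiftR` 8) .&. 0xff) :: Word8

-- "OP_TRUE OP_RETURN <serialized_microcode>"
introductionScript :: BS.ByteString -> BS.ByteString
introductionScript serializedMicrocode =
  BS.pack [0x51, 0x6a] <> pushData serializedMicrocode
```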

(It need not be a single user, multiple users can
coordinate by signing a single transaction that commits
their funds to the microcode introduction.)

Once the above transaction has been deeply confirmed,
the user can then take the hash of the microcode
serialization.
Then the user can use a SCRIPT with `OP_CAT` enabled,
by using a Tapscript with, say, version `0xce`, and
with the SCRIPT having the microcode hash as its first
bytes, followed by the `OP_` codes.

Fullnodes will then process recognized microcode
introduction outputs and store mappings from their
hashes to the microcodes in a new microcodes index.
Fullnodes can then process version-`0xce` Tapscripts
by checking if the microcodes index has the indicated
microcode hash.
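
(Sketch of that lookup, reusing the toy `Microcode` type from the OP_CAT sketch above and assuming the microcode hash is the first 32 bytes of the version-0xce Tapscript.)

```Haskell
import qualified Data.ByteString as BS
import qualified Data.Map as Map

-- The new index: microcode hash -> microcode.
type MicrocodeIndex = Map.Map BS.ByteString Microcode

-- Split off the leading 32-byte microcode hash and look it up; a hash
-- that was never introduced onchain makes the script invalid.
lookupScriptMicrocode :: MicrocodeIndex -> BS.ByteString
                      -> Maybe (Microcode, BS.ByteString)
lookupScriptMicrocode index script = do
  let (h, opcodes) = BS.splitAt 32 script
  mc <- Map.lookup h index
  pure (mc, opcodes)
```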

Semantically, fullnodes take the SCRIPT, and for each
`OP_` code in it, expands it to a sequence of `UOP_`
micro-opcodes, then concatenates each such sequence.
Then, the SCRIPT interpreter operates over a sequence
of `UOP_` micro-opcodes.
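
(As a sketch, again reusing the toy types from the OP_CAT example above, the expansion step is just a lookup-and-concatenate over the microcode map; an `OP_` code missing from the map makes the script invalid.)

```Haskell
import qualified Data.Map as Map

expandScript :: Microcode -> [Opcode] -> Maybe [UOpcode]
expandScript (Microcode m) ops = concat <$> traverse (`Map.lookup` m) ops
```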

Optimizing Microcodes
---------------------

Suppose there is some new microcode that users have
published onchain.

We want to be able to execute the defined microcode
faster than expanding an `OP_`-code SCRIPT to a
`UOP_`-code SCRIPT and having an interpreter loop
over the `UOP_`-code SCRIPT.

We can use LLVM.

WARNING: LLVM might not be appropriate for
network-facing security-sensitive applications.
In particular, LLVM bugs, especially nondeterminism
bugs, can lead to consensus divergence and disastrous
chainsplits!
On the other hand, LLVM bugs are compiler bugs and
the same bugs can hit the static compiler `cc`, too,
since the same LLVM code runs in both JIT and static
compilation, so this risk already exists for Bitcoin.
(i.e. we already rely on LLVM not being buggy enough
to trigger Bitcoin consensus divergence, else we would
have written Bitcoin Core SCRIPT interpreter in
assembly.)

Each `UOP_`-code has an equivalent tree of LLVM code.
For each `Opcode` in the microcode, we take its
sequence of `UOpcode`s and expand them to this tree,
concatenating the equivalent trees for each `UOpcode`
in the sequence.
Then we ask LLVM to JIT-compile this code to a new
function, running LLVM-provided optimizers.
Then we put a pointer to this compiled function into a
256-long array of functions, where the array index is
the `OP_` code.

The SCRIPT interpreter then simply iterates over the
`OP_` code SCRIPT and calls each of the JIT-compiled
functions.
This reduces much of the overhead of the `UOP_` layer
and makes it approach the current performance of the
existing `OP_` interpreter.

For the default Bitcoin SCRIPT, the opcodes array
contains pointers to statically-compiled functions.
A microcode that is based on the default Bitcoin
SCRIPT copies this opcodes array, then overwrites
the entries.
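
(A sketch of that dispatch table in Haskell terms; `Interpreter` is stood in by `IO` so the fragment is self-contained, and `jitCompile` represents the LLVM pipeline described above.)

```Haskell
import qualified Data.Vector as V

-- Stand-in for the interpreter monad, purely so this sketch compiles alone.
type Interpreter = IO

type Handler       = Interpreter ()
type DispatchTable = V.Vector Handler   -- 256 entries, indexed by OP_ code byte

-- Copy the default (statically compiled) table, then overwrite only the
-- redefined entries with freshly JIT-compiled handlers.
specializeTable :: ([u] -> Handler)        -- jitCompile
                -> DispatchTable           -- defaults
                -> [(Int, [u])]            -- redefined opcodes and expansions
                -> DispatchTable
specializeTable jitCompile defaults overrides =
  defaults V.// [ (op, jitCompile uops) | (op, uops) <- overrides ]

-- The SCRIPT interpreter then just indexes the table for each OP_ code byte.
runCompiled :: DispatchTable -> [Int] -> Interpreter ()
runCompiled table = mapM_ (table V.!)
```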

Future versions of Bitcoin Core can "bless"
particular microcodes by providing statically-compiled
functions for those microcodes.
This leads to even better performance (there is
no need to recompile ancient onchain microcodes each
time Bitcoin Core starts) without any consensus
divergence.
It is a pure optimization and does not imply a
tightening of rules, and is thus not a softfork.

(To reduce the chance of network faults being used
to poke into `W|X` memory (since `W|X` memory is
needed in order to actually JIT compile) we can
isolate the SCRIPT interpreter into its own process
separate from the network-facing code.
This does imply additional overhead in serializing
transactions we want to ask the SCRIPT interpreter
to validate.)

Comparison To Jets
------------------

This technique allows users to define "jets", i.e.
sequences of low-level general operations that users
have determined are common enough they should just
be implemented as faster code that is executed
directly by the underlying hardware processor rather
than via a software interpreter.
Basically, each redefined `OP_` code is a jet of a
sequence of `UOP_` micro-opcodes.

We implement this by dynamically JIT-compiling the
proposed jets, as described above.
SCRIPTs using jetted code remain smaller, as the
jet definition is done in a previous transaction and
does not require copy-pasta (Do Not Repeat Yourself!).
At the same time, jettification is not tied to
developers, thus removing the need to keep softforking
new features --- we only need define a sufficiently
general language and then we can implement pretty much
anything worth implementing (and a bunch of other things
that should not be implemented, but hey, users gonna
use...).

Bugs in existing microcodes can be fixed by basing a
new microcode from the existing microcode, and
redefining the buggy implementation.
Existing Tapscripts need to be re-spent to point to
the new bugfixed microcode, but if you used the
point-spend branch as an N-of-N of all participants
you have an upgrade mechanism for free.

In order to ensure that the JIT-compilation of new
microcodes is not triggered trivially, we require
that users petitioning for the jettification of some
operations (i.e. introducing a new microcode) must
sacrifice Bitcoins.

Burning Bitcoins is better than increasing the weight
of microcode introduction outputs; all fullnodes are
affected by the need to JIT-compile the new microcode,
so they benefit from the reduction in supply, thus
getting compensated for the work of JIT-compiling the
new microcode.
Other mechanisms for making microcode introduction
outputs expensive are also possible.

Nothing really requires that we use a stack-based
language for this; any sufficiently FP language
should allow referential transparency.
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev