Hi Tim, answers inline:

> Your observation is right: If each participant has a distinct b_i as
> you've sketched, then the duplicate checks can be omitted. And I tend
> to agree that this is the more natural scheme. In fact, this is where we
> started, and we had a security proof of this (though without tweaking
> worked out) in an earlier unpublished draft of the paper, and only
> afterwards we found a scheme with a single b.

I see. Funnily enough, the day after I posted "my idea" I saw that ~ the same thing exists in the original FROST paper!

> The reason why we prefer the scheme with a single b is simply
> efficiency. The current signing protocol needs 3 group exponentiations
> (i.e., scalar-point multiplications). With separate b values, one of
> those becomes a multi-exponentiation of size n-1, which is much slower
> and needs O(n/log n) time instead of O(1).

OK. I can see where the count of 3 comes from and, indeed, we would need a multi-exponentiation in signing here. But we already need one in verifying.
So we'd be going from O(1) sign, O(n/log n) verify to O(n/log n) sign, O(n/log n) verify. As per Table 1, the only scheme that's better than that while being unrestricted is BN06, but that isn't a 2-round scheme.
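To make the cost difference concrete, here's a toy sketch (not the paper's actual scheme, and using a multiplicative group mod a prime as a stand-in for the elliptic-curve group, so "group addition" is modular multiplication and scalar multiplication is pow()). The single-b variant needs one scalar multiplication per signer; per-participant b_i values turn that into a size-n multi-exponentiation:

```python
# Toy illustration only: compare signer-side nonce aggregation with a
# single coefficient b versus distinct per-participant coefficients b_i.
# A multiplicative group mod p stands in for the curve group.

p = 2**127 - 1          # toy prime modulus (illustrative only)
g = 3                   # toy generator

def single_b(R2s, b):
    # Single b: combine the R_{2,i} first (cheap group additions),
    # then a single scalar multiplication -- O(1) expensive ops.
    R2 = 1
    for R in R2s:
        R2 = (R2 * R) % p          # "group addition"
    return pow(R2, b, p)           # the one scalar multiplication

def per_signer_b(R2s, bs):
    # Distinct b_i: naively n scalar multiplications, i.e. a size-n
    # multi-exponentiation (O(n/log n) with Pippenger-style algorithms).
    acc = 1
    for R, b in zip(R2s, bs):
        acc = (acc * pow(R, b, p)) % p
    return acc

# Sanity check: with all b_i equal, the two variants must agree.
R2s = [pow(g, k, p) for k in (5, 7, 11)]
assert single_b(R2s, 13) == per_signer_b(R2s, [13, 13, 13])
```

(The asymptotic O(n/log n) claim assumes a proper multi-exponentiation algorithm; the naive loop above is O(n).)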

My initial reaction would be: since it's not worsening the scaling of the verifier, does it matter? And *if* this were only for Bitcoin, then also because n has some upper bound in that context (in the hundreds, I think). A multi-exponentiation in signing for 100 inputs with n/log n scaling seems like it wouldn't be an issue since, after all, we are assuming we can do it in transaction verification. Signers being more constrained than verifiers doesn't seem realistic; am I missing something there? (Hardware signing devices, perhaps? Is that an actual concern, given signers have so much more time than verifiers? Perhaps Lightning-like protocols (though not LN itself, I think)?)

The scheme is explicitly not limited to Bitcoin, nor blockchains, though, so there's that; is that relevant here?

> And yes, the uniqueness check looks a bit strange at first glance, but
> (as the proof shows) there should be nothing wrong with it. One could
> argue that the uniqueness check is a potential footgun in practice
> because an implementation could omit it by accident, and would still
> "work" and produce signatures. But we find this not really convincing
> because it's possible to create a failing test vector for this case.

Yes, the footgun you point to in Section 4.2 is a very plausible one, but indeed, a test vector could alleviate that.
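To spell out what such a negative test vector would look like (this is just my reading of the check, not the paper's pseudocode, and the nonce values are made up): the signer must refuse to sign whenever two R_{2,i} values coincide, so a test vector with a duplicated nonce catches any implementation that silently drops the check:

```python
# Sketch of the uniqueness check plus a hypothetical negative test vector.
# The values below are placeholders, not real nonces.

def r2_values_unique(R2s):
    # Session is acceptable only if all second-round nonces are
    # pairwise distinct.
    return len(set(R2s)) == len(R2s)

# Honest session: all nonces distinct, signing may proceed.
honest = [0xA1, 0xB2, 0xC3]
# Negative test vector: participant 2 copied participant 0's nonce,
# so a correct implementation must refuse to sign.
copied = [0xA1, 0xB2, 0xA1]

assert r2_values_unique(honest)
assert not r2_values_unique(copied)
```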

Yes, I have no concrete suggestion for now as to how this style of algorithm, with comparative checks rather than purely algebraic ones, may cause a problem; I just find it suspicious ... but, to continue on that thought:

> We didn't talk about identifying disruptive participants in the paper
> at all, but one could also argue that the uniqueness check creates a
> problem if the honest participants want to figure out who disrupted a
> session: if malicious participant i copies from honest participant j,
> then how do the others tell which of them to exclude?
> But if you think about it, that's not a real issue. In any case,
> identifying disruptive participants will work reliably only if the
> coordinator is honest, so let's assume this. And then, if additionally
> the participants have secure channels to the coordinator, then no
> malicious participant can steal the R_{2,j} of an honest participant j.
> So, if the coordinator sees that R_{2,i} = R_{2,j}, the right
> conclusion is that they are colluding and both malicious.

Yes, those are some interesting points to consider. On one detail: "In any case, identifying disruptive participants will work reliably only if the
coordinator is honest, so let's assume this." -- this could also be addressed with proofs of knowledge, no?
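By "proofs of knowledge" I mean something like the following rough sketch (again with a toy multiplicative group mod p standing in for the curve, and fixed values where real randomness is required): each participant attaches a Schnorr proof of knowledge of the discrete log of their R_{2,i}, so a party who merely copied an honest participant's nonce cannot produce a valid proof for it:

```python
# Toy Schnorr proof of knowledge of r in R = g^r, as an illustration of
# how nonce ownership could be demonstrated. NOT production code.

import hashlib

p = 2**127 - 1                      # toy prime modulus (illustrative only)
q = p - 1                           # toy group order
g = 3                               # toy generator

def challenge(R, A):
    # Fiat-Shamir challenge binding the statement R and commitment A.
    data = f"{R}|{A}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(r):
    # Prover knows r with R = g^r.
    R = pow(g, r, p)
    a = 123456789 % q               # proof nonce: fixed here for the demo,
                                    # but it MUST be fresh and random in practice
    A = pow(g, a, p)
    c = challenge(R, A)
    z = (a + c * r) % q
    return R, (A, z)

def verify(R, proof):
    A, z = proof
    c = challenge(R, A)
    return pow(g, z, p) == (A * pow(R, c, p)) % p

R, pi = prove(42)
assert verify(R, pi)                # the honest prover's proof passes
```

Of course this adds another exponentiation or two per participant, which cuts against the efficiency argument above, so I'm not claiming it's a net win.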

Anyway, for me it was more a sort of preference for purely algebraic algorithms. It's a little fanciful, but algebraic algorithms are easier to encode in circuits in zero knowledge (though things like equality checks are entirely doable ofc!) and maybe easier to "encode" into modular schemes that use them as a building block. Maybe. Fewer conditional branches / loops to traverse in the code? And/or you could wave your hands and just say they "feel cleaner", or are easier to reason about.

As for finding a concrete problem with the equality checks, I have not. At least not yet.

Cheers,
AdamISZ/waxwing


--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/182e01b0-30f0-4dec-b4bb-5057bd4ef89fn%40googlegroups.com.