Hello nothingmuch, Sjors, list,

Thanks nothingmuch for the writeup on coinjoins with coordinators. This general topic is rarely covered, and while people like me know about it, we (well, I at least) have been too lazy to get into the details of what kinds of problems exist.

I think there are two distinct categories of weakness here:

1/ The ability of the coordinator to Sybil a targeted user by *not* including other, unknown-to-coordinator entities in the join. This can be done by blocking those other entities' access, and/or by Sybilling, i.e. adding the coordinator's own entities.

This first weakness is absolutely fundamental for all participants *except* the coordinator; you can't code/algorithm/crypto your way around it. Justification: the essence of this coordination is that it must be anonymous for the participants, that is the whole point. Therefore the ability to distinguish between Sybils and non-Sybils cannot exist, in pure form.

However: the weakness is ameliorated, but not removed, by decentralization, i.e. by having oneself be the coordinator for one's own join. I say "not removed" because the Sybil risk still exists, but the bar is set much higher for the attacker, since they have to Sybil the whole ecosystem, i.e. they have no control over who else takes part. It is also ameliorated, but not removed, by cost imposition (see e.g. Joinmarket fidelity bonds). What's clear is that this risk is far worse with a static central coordinator for all joins than with the "each new participant coordinates" model.

Also, to correct a common imprecision (so not, of course, addressed to you nothingmuch, but to the reader): the taker-maker model is *not* incompatible with coordinator blinding.

2/ The ability of the coordinator to tag a targeted user via shenanigans with the blinding key, round ID etc.

The story you tell on this is interesting. In short, as per the "fundamental weakness" paragraph above, it's the nature of these systems that the user is anonymous and ephemeral, and therefore the only "identity" they have is the coin they bring to the join. Given that, making attestations verifiable requires blockchain access as the ground truth. For similar things in Joinmarket's protocol (and rest assured, we had basically the same requirement), we never had to bat an eye, because we can query the utxo set at any time, since we *force* users to use full nodes. But as you say, it *should* still be fully possible with various kinds of light client ... so I am wondering why the people working on the Wasabi project didn't consider this a sine qua non. Why even bother with blinding if you're not going to give the client a surety that the blinding is actually doing anything?

On reflection, I can see at least one counter-argument: suppose user2 is looking at user1's signature on the context of the round, and they are given key P for user1's signature and just told "trust me" by the coordinator, and they go ahead, because user2 only has a light client and no merkle proofs. Well, if the coordinator lies about P, user2 can find out later that day, using a full node or block explorer to check user1's utxo (a rough sketch of that check is below). Now, if the coordinator's message is *signed* so as to be non-repudiable, then user2 can prove to the world that the coordinator lied. Conditional on that signing, I think this counter-argument is strong; in the absence of signing, with repudiable messages instead, I think it's weak.

I guess all this comes into particularly sharp focus now that we have various different Wasabi coordinators. They should all be assumed to be run by the Feds, so to speak, and analyzed from that perspective (not that that wasn't true with only one coordinator; it's just more true now).
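To make the "blockchain as ground truth" point concrete, here is roughly the check user2 needs, sketched in Python against Bitcoin Core's gettxout RPC. For simplicity I assume the registered input is taproot and that the claimed key P is the output key; the naming and wiring here are purely illustrative, not any wallet's actual code.

    import requests

    RPC_URL = "http://127.0.0.1:8332"      # your own full node
    RPC_AUTH = ("rpcuser", "rpcpassword")  # placeholder credentials

    def rpc(method, *params):
        r = requests.post(RPC_URL, auth=RPC_AUTH,
                          json={"jsonrpc": "1.0", "id": "check", "method": method,
                                "params": list(params)})
        r.raise_for_status()
        return r.json()["result"]

    def utxo_matches_key(txid: str, vout: int, claimed_xonly_key: bytes) -> bool:
        """Is (txid, vout) still unspent, confirmed, and paying to the claimed taproot key?"""
        utxo = rpc("gettxout", txid, vout)   # returns None if spent or unknown
        if utxo is None or utxo.get("confirmations", 0) < 1:
            return False
        expected_spk = "5120" + claimed_xonly_key.hex()  # OP_1 PUSH32 <output key>
        return utxo["scriptPubKey"]["hex"] == expected_spk

With a light client the same comparison needs a merkle proof (or a trusted block explorer) instead of gettxout, which is exactly the gap discussed above.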
A few more specific Qs/comments:

On the Samourai issue:

> Because the key is not announced a priori, nor is it signed by the
> participants' spending keys before output registration or signing[^5],
> the server can provide each input with a unique RSA key. Since the
> unblinded signatures are made by different keys, the server can learn
> the mapping from inputs to outputs.

My gut reaction is to do "permanent key tweaked with context" here (sketched below), so the client could easily verify, based on remembering the permanent key, that the correct tweak (hash of date plus round number plus whatever) had been applied. But that works in the Schnorr family; I don't know if a key tweak can be applied to RSA? Perhaps the question is academic, but I want to know how easily this could have been fixed in practice. (I don't know why they were using RSA, but I can imagine various practical reasons; there were, after all, known attacks on Schnorr blinded signing.)

> 2. use of deterministic shuffling in the transaction, ensuring that
> signatures can only be aggregated in the absence of equivocation
> (assuming the corresponding Lehmer code has enough bits of entropy)

That's an elegant idea; I presume it depends on the tx size being large enough (yeah, bits of entropy), but that certainly isn't an issue for the Wa(bi)sabi design. Couldn't a similar trick be played with the coordinator's receiving address (assuming that Wasabi still works like that, with a coordinator fee address in the tx)?
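To spell out how I understand the deterministic shuffling idea: every participant derives the ordering of inputs and outputs from what they each believe is the shared round data, so an equivocating coordinator ends up holding signatures that cannot be assembled into a single transaction. A minimal sketch (my own serialization choices, purely illustrative, not the actual Wasabi construction):

    import hashlib
    import math

    def deterministic_order(round_context: bytes, items: list[bytes]) -> list[bytes]:
        """Order items by a hash keyed to the (supposedly shared) round context.
        Participants shown different context derive different orderings, so their
        signatures do not fit into one and the same transaction."""
        return sorted(items, key=lambda item: hashlib.sha256(round_context + item).digest())

    def ordering_entropy_bits(n: int) -> float:
        """Entropy of a uniform ordering of n items: log2(n!), the information
        content of the corresponding Lehmer code."""
        return math.log2(math.factorial(n))

    # ordering_entropy_bits(250) is roughly 1636 bits - no shortage for a large
    # WabiSabi round - whereas a 5-in/5-out Whirlpool-style mix only gives
    # log2(5!) ~ 6.9 bits per side.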
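And going back to the "permanent key tweaked with context" suggestion above: what I have in mind, in the Schnorr/EC setting, is that the client remembers one long-lived coordinator key P and checks that the per-round key equals P + H(P || context)*G. A rough pure-Python sketch (my own naming and context encoding, not anything Samourai or Wasabi actually implement, and it leaves the RSA question open):

    import hashlib

    # secp256k1 parameters
    FIELD_P = 2**256 - 2**32 - 977
    GROUP_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
    G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
         0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

    def point_add(p, q):
        """Affine point addition on secp256k1 (None is the point at infinity)."""
        if p is None: return q
        if q is None: return p
        (x1, y1), (x2, y2) = p, q
        if x1 == x2 and (y1 + y2) % FIELD_P == 0:
            return None
        if p == q:
            lam = 3 * x1 * x1 * pow(2 * y1, -1, FIELD_P) % FIELD_P
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, FIELD_P) % FIELD_P
        x3 = (lam * lam - x1 - x2) % FIELD_P
        return (x3, (lam * (x1 - x3) - y1) % FIELD_P)

    def point_mul(k, p):
        """Double-and-add scalar multiplication."""
        result = None
        while k:
            if k & 1:
                result = point_add(result, p)
            p = point_add(p, p)
            k >>= 1
        return result

    def context_tweak(permanent_pub, context: bytes) -> int:
        """t = H(P || context), reduced mod the group order."""
        ser = permanent_pub[0].to_bytes(32, "big") + permanent_pub[1].to_bytes(32, "big")
        return int.from_bytes(hashlib.sha256(ser + context).digest(), "big") % GROUP_N

    def round_key(permanent_pub, context: bytes):
        """P_round = P + H(P || context)*G; anyone who remembers P can recompute this."""
        return point_add(permanent_pub, point_mul(context_tweak(permanent_pub, context), G))

    # Client-side check: verify that the key the coordinator announces for this round
    # equals round_key(P, b"2025-01-06|round 1234") for the remembered permanent key P.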
> it seems to me that if it was controlled by a rational attacker it
> would not use the overt key tagging attack when covert ways of
> deanonymizing are available and just as effective.

It seems I missed something here. What covert attacks are possible except for Sybilling / excluding other users from the round? (Which is only semi-covert at best.) Maybe stuff like timing and Tor?

Cheers,
waxwing/AdamISZ

On Monday, January 6, 2025 at 8:31:34 AM UTC-6 Yuval Kogman wrote:
> On Mon, 6 Jan 2025 at 14:08, Sjors Provoost wrote:
>
> > Do we know based on observations or published server-side code whether
> > this key was:
> > 1) the same for all time; or
> > 2) unique for each round; or
> > 3) unique for each registration request
> >
> > In case of (1) and (2) it would have been possible to detect a targeted*
> > attack, of course only if you were on the lookout.
>
> Only (2) would be correct behavior. If (3) was performed, then that is
> just the tagging attack. If (1) was done, then that would have allowed
> clients to stockpile blind signatures in earlier rounds, and register
> excess outputs during the output registration phase of later ones to
> disrupt them (wasabi 1 had this bug FWIW).
>
> If the archived code is considered reliable, then it seems (2) was the
> implemented behavior:
>
> https://github.com/Archive-Samourai-Wallet/whirlpool-server/blob/develop/src/main/java/com/samourai/whirlpool/server/beans/Mix.java#L67
>
> > Perhaps if the app kept sufficient logs, it would still be possible to
> > retroactively check this.
>
> I'm not aware of any such observation efforts. They would require
> modifying the client; at least with the archived version that I saw,
> the `blindingParams` member is not used that way (there are other
> debug logs in the whirlpool client, but not with this data).
>
> However, since the public key is only given in response to input
> registration, i.e. after the server has learned of the intended UTXO,
> and because in many cases an xpub linking that coin may have also been
> revealed to the server, and the server controls the grouping of coins
> into sets of 5, it seems to me that if it was controlled by a rational
> attacker it would not use the overt key tagging attack when covert
> ways of deanonymizing are available and just as effective.
>
> > * = I'm thinking of an active attacker who wants to track specific UTXOs.
> > They could preemptively "persuade" the coordinator server to provide
> > a different RSA key or round ID if they ever try to join a round.
>
> While this is certainly possible, maintaining plausible deniability is
> easier if the server merely maliciously controls the placement of
> UTXOs, ensuring that targeted UTXOs end up only with xpub-revealed
> and/or adversary-controlled peers.
>
> > Are these round IDs logged by clients?
>
> In the case of wasabi, both my recollection and a cursory search
> indicate that yes:
>
> https://github.com/WalletWasabi/WalletWasabi/blob/42e7963d7fffc7f8f37fd9b6e8973235859ee7fb/WalletWasabi/WabiSabi/LoggerTools.cs#L36
>
> I did not check in detail where this information is logged, and I
> don't think a list of all published round IDs is logged.
>
> I would not encourage users to share such logs, or their data, without
> careful consideration. Even if logs were scrubbed, revealing a/the
> set of rounds in which a user participated can significantly harm
> privacy, especially since participation in rounds and coin selection
> does not take into account history intersection attacks. See also
> these issues re log scrubbing:
> https://github.com/WalletWasabi/WalletWasabi/issues/6770
> https://github.com/WalletWasabi/WalletWasabi/issues/6670
> (the first was closed without fixing, deemed a duplicate of the 2nd - I'd
> say it isn't - which is still open...)
>
> One of the developers still working on wasabi indicated that there
> will finally be some efforts to mitigate this class of attack:
>
> 1. redundant queries from isolated tor circuits of the round status
> information where round IDs are published, and consistency checks for
> the data returned
> 2. use of deterministic shuffling in the transaction, ensuring that
> signatures can only be aggregated in the absence of equivocation
> (assuming the corresponding Lehmer code has enough bits of entropy)
>
> Since round IDs are published ahead of time in the status requests,
> and clients explicitly choose which round to join before revealing any
> of their intended inputs, the first mitigation is straightforward and
> would present a significant barrier.