Attacks on Threshold Schemes: Part 1

Security

Dec 3, 2025


Threshold cryptography looks bulletproof on paper. You distribute trust across multiple parties, require a minimum threshold to reconstruct secrets or produce signatures, and suddenly you've got resilience against single points of failure. The math checks out, the security proofs are elegant, and the protocols feel rock-solid.

Then you ship to production and things start breaking. Sometimes it's a missing validation check that nobody thought mattered. Sometimes it's a subtle ambiguity in how data gets encoded. The gap between "provably secure threshold scheme" and "production-ready implementation that won't get your users rekt" is wider than most people realize.

This post walks through real attacks on real threshold systems, the kind that security auditors have actually found in the wild. The goal isn't to scare you away from threshold schemes; they're still one of the best tools we have for distributed trust. The goal is to give you a practical checklist of what actually goes wrong, so you can audit your own implementation with clear eyes before it hits mainnet.

Pedersen DKG: Wrong-Degree Polynomial Attack

During an audit of FROST implementations, Trail of Bits discovered an interesting vulnerability: participants don't check the length of the dealer's coefficient commitments, so a submitted polynomial can have a much higher degree than expected, which can make the key unrecoverable.

Pedersen's distributed key generation builds on Feldman's verifiable secret sharing, which extends Shamir's scheme by letting participants verify their shares without a trusted dealer. The basic flow: each party $P_i$ generates a random degree-$t$ polynomial $p_i(x) = a_{i,0} + a_{i,1}x + \cdots + a_{i,t}x^t$ where $a_{i,0}$ is their secret contribution. They publish commitments to each coefficient:

$$A_{i,0} = g^{a_{i,0}}, \quad A_{i,1} = g^{a_{i,1}}, \quad \ldots, \quad A_{i,t} = g^{a_{i,t}}$$

Then $P_i$ sends the evaluation $s_{i,j} = p_i(j)$ privately to each party $P_j$. Recipient $P_j$ verifies their share by checking:

$$g^{s_{i,j}} \stackrel{?}{=} \prod_{k=0}^{t} A_{i,k}^{j^k}$$

To form the final shared secret, each party $P_j$ sums all shares they received: $s_j = \sum_{i=1}^{n} s_{i,j}$. The collective polynomial is $p(x) = \sum_{i=1}^{n} p_i(x)$, which has degree $t$ and secret $s = p(0) = \sum_{i=1}^{n} a_{i,0}$. Any $t+1$ parties can recover $s$ via Lagrange interpolation.
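
The share-and-verify flow above fits in a few lines over a toy group (a minimal sketch; the modulus $p = 23$, subgroup order $q = 11$, and generator $g = 2$ are illustrative stand-ins for a real elliptic-curve group):

```python
# Toy Feldman VSS round trip: modulus p = 23, prime-order subgroup q = 11,
# generator g = 2 stand in for a real elliptic-curve group.
import random

p, q, g = 23, 11, 2

def commit(coeffs):
    # A_k = g^{a_k} for each polynomial coefficient.
    return [pow(g, a, p) for a in coeffs]

def share(coeffs, j):
    # s_j = p(j) over Z_q.
    return sum(a * pow(j, k, q) for k, a in enumerate(coeffs)) % q

def verify(A, j, s):
    # Check g^{s_j} == prod_k A_k^{j^k}; exponents reduce mod the group order q.
    rhs = 1
    for k, Ak in enumerate(A):
        rhs = rhs * pow(Ak, pow(j, k, q), p) % p
    return pow(g, s, p) == rhs

t = 2
coeffs = [random.randrange(q) for _ in range(t + 1)]   # degree-t polynomial
A = commit(coeffs)
assert all(verify(A, j, share(coeffs, j)) for j in range(1, 6))
```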

The Attack

Many implementations don't validate the length of the commitment vector. A malicious $P_i$ can submit a polynomial of degree $T > t$, say degree $2t$ or even degree $n$. Since the final polynomial's degree is the maximum of all input degrees, this silently raises the reconstruction threshold from $t+1$ to $T+1$.

If $T \geq n$, the secret becomes unrecoverable: no subset of parties can reconstruct it, even with everyone's cooperation. All shares are now locked, and any funds controlled by that key are permanently lost. Even if $T < n$, signature attempts with only $t+1$ parties will mysteriously fail, and implementations may blame honest participants for the failure.

The fix is trivial: check that each commitment vector has exactly $t+1$ elements. Trail of Bits found ten vulnerable implementations, including multiple FROST variants and GG18/GG20 libraries.
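
To see why the missing check matters, here's a toy demonstration over $\mathbb{Z}_{11}$ (field size and coefficients are arbitrary): shipping a degree-$2t$ polynomial makes reconstruction from $t+1$ shares silently wrong.

```python
# Toy field Z_11: a dealer who ships a degree-4 polynomial instead of the
# expected degree-2 one silently raises the reconstruction threshold.
q = 11

def eval_poly(coeffs, x):
    return sum(a * pow(x, k, q) for k, a in enumerate(coeffs)) % q

def lagrange_at_zero(points):
    # Interpolate p(0) from (x, y) pairs over Z_q.
    secret = 0
    for xj, yj in points:
        num = den = 1
        for xm, _ in points:
            if xm != xj:
                num = num * (-xm) % q
                den = den * (xj - xm) % q
        secret = (secret + yj * num * pow(den, -1, q)) % q
    return secret

t = 2
honest = [5, 3, 7]           # degree t: any t+1 = 3 shares recover the 5
evil = [5, 3, 7, 2, 9]       # degree 2t: same low coefficients, longer tail
pts = lambda c, n: [(x, eval_poly(c, x)) for x in range(1, n + 1)]

assert lagrange_at_zero(pts(honest, 3)) == 5
assert lagrange_at_zero(pts(evil, 3)) != 5   # t+1 shares reconstruct garbage
assert lagrange_at_zero(pts(evil, 5)) == 5   # threshold silently became 2t+1
```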

Missing Discrete Log Check in MtA

The Multiplicative-to-Additive (MtA) protocol is common in threshold schemes: it converts multiplicative shares of a product $a \times b$ into additive shares $\alpha + \beta \equiv a \times b \pmod{q}$. Most implementations include zero-knowledge proofs to ensure parties behave honestly, but these proofs require careful setup.

The Setup

Each party generates auxiliary parameters for the ZK proofs:

  • An RSA modulus $\tilde{N} = pq$ (factorization kept secret)
  • Two group elements $h_1, h_2 \in \mathbb{Z}_{\tilde{N}}^*$ that should be unrelated (no known discrete log between them)

The commitment in the proof typically looks like $z = h_1^m h_2^{\rho} \bmod \tilde{N}$, where $m$ is the value being proven and $\rho$ is blinding randomness. The security relies on the prover not knowing $\log_{h_1}(h_2)$; if they did, they could break the proof's soundness.

The Attack

The protocol should include a check that $h_1$ and $h_2$ generate the same cyclic subgroup and that their discrete log relation is unknown. But many implementations skip this verification entirely, trusting the sender to have generated them correctly.

An attacker exploits this by setting $h_2 = 1$. Now the commitment collapses:

$$z = h_1^m h_2^{\rho} = h_1^m \cdot 1^{\rho} = h_1^m \bmod \tilde{N}$$

The blinding term vanishes, and $z$ directly reveals $h_1^m$. To extract $m$, the attacker has two options:

Option 1: Integer logarithm. Choose $\tilde{N}$ extremely large, so that $h_1^m$ never wraps around the modulus; recovering $m$ then reduces to computing an integer logarithm, which is trivial with standard algorithms.

Option 2: Pohlig-Hellman. Choose $\tilde{N}$ as a product of many small primes: $\tilde{N} = p_1 p_2 \cdots p_k$ where each $p_i$ is small. The order of $\mathbb{Z}_{\tilde{N}}^*$ is then $\phi(\tilde{N}) = (p_1-1)(p_2-1)\cdots(p_k-1)$, which is smooth. Using Pohlig-Hellman, the attacker computes $m \bmod (p_i-1)$ for each factor (fast, since the factors are small), then reconstructs $m$ via the Chinese Remainder Theorem.
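
Option 2 can be run end-to-end with toy numbers (a sketch: the three small primes, the base $h_1 = 7$, and the "secret" $m = 100$ are all illustrative, and plain per-prime brute force stands in for full Pohlig-Hellman, which the tiny factors make equally trivial):

```python
# Toy smooth-modulus extraction: the attacker chose N~ = 11 * 13 * 17, so
# phi(N~) is smooth and the discrete log of z = h1^m falls to per-prime
# brute force plus the Chinese Remainder Theorem.
from math import gcd

primes = [11, 13, 17]
N = 11 * 13 * 17
h1 = 7                    # attacker-chosen base
m_secret = 100            # value the proof was supposed to hide
z = pow(h1, m_secret, N)  # with h2 = 1 the commitment is just h1^m

def dlog_mod_p(base, target, p):
    # Brute-force x with base^x = target (mod p); returns (x, ord(base)).
    order = 1
    while pow(base, order, p) != 1:
        order += 1
    for x in range(order):
        if pow(base, x, p) == target % p:
            return x, order
    raise ValueError("target not in subgroup")

def crt(residues, moduli):
    # Generalized CRT, tolerating non-coprime moduli (the orders share factors).
    r, m = 0, 1
    for ri, mi in zip(residues, moduli):
        g = gcd(m, mi)
        k = ((ri - r) // g * pow(m // g, -1, mi // g)) % (mi // g)
        r, m = r + m * k, m // g * mi
    return r % m, m

res, mods = zip(*(dlog_mod_p(h1, z, p) for p in primes))
recovered, modulus = crt(res, mods)
assert recovered == m_secret % modulus   # m recovered mod lcm of the orders
```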

Either way, the attacker recovers $m$ and breaks the proof. This leaks the victim's secret value that was supposed to be protected by the zero-knowledge property.

This vulnerability was discovered in Binance's TSS library and described in the paper "Attacking Threshold Wallets".

The Fix: The sender should prove in zero-knowledge that $\log_{h_1}(h_2)$ is unknown and that $h_1$ and $h_2$ generate the same cyclic group. Additionally, the Paillier modulus $\tilde{N}$ must be proven to be the product of two primes of sufficient size.

BitGo Wallet: Missing Proof of Knowledge

Proofs of knowledge aren't just formalities; they're critical checkpoints that prevent malicious parties from feeding garbage into the protocol. The BitGo wallet vulnerability shows what happens when you skip them.

The Protocol

BitGo's TSS implementation follows the GG18 key generation protocol, which has three phases:

  1. Commit: Each player $P_i$ selects a secret key share $u_i$ and broadcasts:

    • A commitment to $g^{u_i}$, hiding their secret
    • A Paillier public key $E_i$ with modulus $N_i$
  2. Reveal & Share: Each player opens their commitment (revealing $g^{u_i}$), then performs Feldman VSS to distribute shares of $u_i$. The final shared secret is $x = \sum u_i$, with public key $y = g^x = \prod g^{u_i}$. Each player ends up with a share $x_i$ such that any threshold subset can reconstruct $x$.

  3. Prove: Each player should prove in zero-knowledge:

    • Knowledge of their share $x_i$ (proof of knowledge)
    • That their Paillier modulus $N_i = p_i q_i$ is the product of exactly two primes (biprimality proof)

BitGo's implementation skipped phase 3 entirely. No proofs. Just trust.

The Attack

The attacker exploits the missing biprimality proof by choosing a malicious Paillier modulus $N = pq$ where $q = 2p + 1$. This is a safe-prime construction, but with a twist: the multiplicative group $\mathbb{Z}_N^*$ has a special structure that leaks information. The attacker also picks any square $V \in \mathbb{Z}_N^*$ (say, $V = 4$) and broadcasts $(N, V)$ as their Paillier parameters.

During the protocol, an honest party $P_j$ (the victim) will encrypt their share under the attacker's key. They send:

$$C = V^x \cdot \text{Enc}(\beta) \bmod N^2$$

where $x$ is the victim's secret share and $\beta$ is some randomness. The Paillier encryption is $\text{Enc}(\beta) = (1+N)^\beta r^N$ for random $r$.

Now the attacker reduces $C$ modulo $q$:

$$C \equiv V^x \cdot (1+N)^\beta r^N \equiv V^x \cdot r^N \pmod{q}$$

The $(1+N)^\beta$ term vanishes mod $q$ since $N \equiv 0 \pmod{q}$. Next, the attacker computes:

$$C^{p+1} \equiv (V^x r^N)^{p+1} \equiv V^{x(p+1)} r^{N(p+1)} \pmod{q}$$

Here's the key observation: in $\mathbb{Z}_q^*$, the group order is $q-1 = 2p$. Every square (like $V$) has order dividing $p$, and $r^{N(p+1)} \equiv 1 \pmod{q}$, because $p+1$ is even, so $2p$ divides $N(p+1)$ and Fermat's little theorem applies. So:

$$C^{p+1} \equiv V^{x(p+1)} \equiv V^x \pmod{q}$$

because $V^p \equiv 1 \pmod{q}$.

Now the attacker has $V^x \bmod q$. Since $q = 2p+1$ is small enough (the attacker chose it), they can brute-force the discrete log to recover $x \bmod p$ (the order of $V$).
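
The whole pipeline fits in a few lines with toy parameters (a sketch: $p = 11$, $q = 23$, and the secret share $x = 9$ are stand-ins, and the group is far too small for real Paillier):

```python
# Toy run of the malicious-modulus attack: q = 2p + 1 with p = 11, so the
# attacker's "Paillier" modulus is N = p*q = 253.
import random
from math import gcd

p, q = 11, 23            # q = 2p + 1; both chosen by the attacker
N = p * q                # malicious modulus, broadcast as if honest
N2 = N * N
V = 4                    # any square mod N works
x = 9                    # victim's secret share (unknown to the attacker)

# Victim side: encrypt the share under the attacker's parameters.
beta = random.randrange(N)
r = random.randrange(1, N)
while gcd(r, N) != 1:
    r = random.randrange(1, N)
C = pow(V, x, N2) * pow(1 + N, beta, N2) % N2 * pow(r, N, N2) % N2

# Attacker side: reduce mod q, strip the Paillier noise with exponent p + 1,
# then brute-force the small dlog. V has order p mod q, so this yields x mod p.
target = pow(C % q, p + 1, q)
recovered = next(e for e in range(p) if pow(V, e, q) == target)
assert recovered == x % p
```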

Scaling the Attack

If $p$ is too large for a single brute-force, the attacker can use a product of many such pairs: $N = p_1 q_1 \cdots p_{16} q_{16}$ where each $q_i = 2p_i + 1$ and each $p_i$ is small (say, 16 bits). They run the attack in parallel to recover $x \bmod p_i$ for each $i$, then use the Chinese Remainder Theorem to reconstruct $x$.

With 16 such pairs and 16-bit primes, the attacker recovers a 256-bit secret share in a few hours on commodity hardware. Once they have one share, they can sign arbitrary transactions unilaterally, completely defeating the threshold property.

This vulnerability was discovered by Fireblocks during a security analysis of BitGo's implementation and described in their blog post.

The Fix: Include the missing proofs. The Paillier modulus $N_i$ must be proven biprime (the product of exactly two large primes), and each player must prove knowledge of their share $x_i$.

Not So Secret Factorization

Sometimes the attack isn't a missing check; it's that the implementation gives the prover knowledge they were never supposed to have. This vulnerability in a two-party ECDSA reshare protocol shows how knowing "too much" can be weaponized.

The Reshare Protocol

Key resharing is standard practice in production TSS wallets: you periodically rotate shares to limit exposure. The protocol looks simple enough:

  1. Alice generates a fresh Paillier keypair $(n', sk')$
  2. Alice sends a new ciphertext $c' = \text{Enc}(x_1')$ under her new key, where $x_1' = x_1 + r$ (her old share plus randomness)
  3. Alice proves in zero-knowledge that $c'$ encrypts a value linearly related to her old ciphertext $c = \text{Enc}(x_1)$, i.e., $c' = c \oplus \text{Enc}(r)$ (using the Paillier homomorphism)

Bob receives $c'$, verifies the proof, and updates his own share to $x_2' = x_2 - r$. The combined secret remains unchanged: $x_1' + x_2' = (x_1 + r) + (x_2 - r) = x_1 + x_2 = x$.

The Problem

The zero-knowledge proof for step 3 requires two moduli $n_1$ and $n_2$. The security assumption from the literature: "Let $n_1$ be a large composite number whose factorization is unknown by Alice and Bob, and $n_2$ be another large number, prime or composite, whose factorization is known or unknown by Alice."

But in this implementation, Alice knows both factorizations; she generated both Paillier keys herself. She can now turn the reshare protocol into an oracle that leaks Bob's share bit by bit.

The Attack

Alice manipulates her new share to be $x_1'' = x_1' + kq$ where $k$ is chosen so that:

$$|n_i| - q \leq x_1' + kq < |n_i|$$

Bob doesn't detect anything wrong. He combines Alice's malicious share with his rotated share:

$$x_{12} = x_1'' + x_2' = (x_1' + kq) + (x_2 - r) = x_1 + x_2 + kq$$

When they run the signing protocol, Bob sends his partial signature to Alice. Alice decrypts the combined result and reduces it mod $q$. But during Paillier decryption, a reduction mod $n_i$ happens first. This is where the oracle kicks in:

  • If $x_2 < n_i - kq - x_1$, the mod-$n_i$ reduction doesn't wrap around, and the final signature verifies correctly.
  • If $x_2 \geq n_i - kq - x_1$, the mod-$n_i$ reduction changes the value, and the signature fails verification.

Alice just learned whether $x_2$ crosses a specific threshold. She records the result.

Extracting the Secret

By repeating this reshare-and-sign cycle with different values of $k$ and $n_i$, Alice narrows down Bob's share. Each successful or failed signature reveals one bit of information about $x_2$. With enough iterations (proportional to the bit length of $x_2$), Alice reconstructs Bob's entire share.
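
Stripped of the Paillier arithmetic, the oracle is just a comparison, and extraction is binary search (a schematic model; the 32-bit share value and the threshold interface are stand-ins for the real reshare-and-sign interaction):

```python
# Schematic of the reshare oracle: each reshare-and-sign round tells Alice
# whether Bob's share x2 lies below a threshold she controls (the signature
# verifies iff the mod-n reduction didn't wrap). Binary search then recovers
# x2 in log2(range) rounds.
x2 = 0xDEADBEEF                      # Bob's secret share (stand-in value)

def signature_verifies(threshold):
    # True iff the decryption didn't wrap, i.e. x2 < threshold.
    return x2 < threshold

lo, hi, rounds = 0, 1 << 32, 0
while hi - lo > 1:                   # invariant: x2 in [lo, hi)
    mid = (lo + hi) // 2
    if signature_verifies(mid):
        hi = mid
    else:
        lo = mid
    rounds += 1
assert lo == x2 and rounds == 32     # one bit of x2 per signing round
```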

Once she has both $x_1$ and $x_2$, the threshold property collapses. Alice can sign any transaction unilaterally.

This issue was found during an analysis of a production two-party ECDSA implementation and is detailed in the paper "Attacking Threshold Wallets".

The Fix

The ZK proof setup must guarantee that Alice doesn't know the factorization of $n_1$. Better yet, the library maintainers did what made the most sense: they replaced the vulnerable reshare with the full two-party key generation protocol. Alice and Bob run a coin flip to refresh their shares and repeat all ZK proofs from DKG. It's slower, but it's correct, and it doesn't give Alice an oracle.

Ambiguous Hash Encoding

Fiat-Shamir is supposed to be straightforward: hash the transcript, use the output as your challenge. But if you're sloppy about how you encode the transcript, you hand the prover a degree of freedom they're not supposed to have.

The Flaw

When converting an interactive proof to non-interactive via Fiat-Shamir, you need to hash multiple values together: group elements, integers, commitments, etc. A naive approach: concatenate everything with a delimiter.

$$H(\alpha_1 \,\|\, D \,\|\, \alpha_2 \,\|\, D \,\|\, \cdots \,\|\, D \,\|\, \alpha_n)$$

where $D$ is some delimiter byte sequence (say, a pipe character or a null byte). The problem: when the transcript is parsed back, there's ambiguity. Is the first element $\alpha_1$, or is it $\alpha_1 \,\|\, D \,\|\, \alpha_2$? If the prover can shift boundaries around, they can manipulate which parts of the input get interpreted as which values, without changing the hash output.
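
The ambiguity is easy to reproduce (a sketch; sha256 and the pipe delimiter are arbitrary choices): two different transcripts serialize to the same byte stream under naive delimiting, while length-prefixing keeps them apart.

```python
# Two parsings, one byte stream: naive delimiter concatenation lets the
# prover re-draw value boundaries after hashing.
from hashlib import sha256

def naive_hash(parts):
    return sha256(b"|".join(parts)).hexdigest()

def length_prefixed_hash(parts):
    # Unambiguous: each part carries its own length.
    data = b"".join(len(p).to_bytes(4, "big") + p for p in parts)
    return sha256(data).hexdigest()

t1 = [b"alpha", b"beta|gamma"]     # claimed parsing #1
t2 = [b"alpha|beta", b"gamma"]     # claimed parsing #2
assert naive_hash(t1) == naive_hash(t2)                      # same challenge!
assert length_prefixed_hash(t1) != length_prefixed_hash(t2)  # fixed
```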

Example: Discrete Log Proof

Consider a simple discrete log proof (like the one used in threshold signature ZK subprotocols):

  1. Prover commits: Pick random $\rho$, send $\alpha = h^\rho$ (or $\alpha = h^\rho g^{-1}$, depending on strategy)
  2. Challenge: Compute challenge bits $c = H(g, h, N, \alpha_1, \alpha_2, \ldots, \alpha_\lambda)$ where $\lambda$ is the number of repetitions
  3. Response: For each iteration $i$, send $\tau_i = \rho_i + c_i \cdot \text{secret}$

The proof is repeated $\lambda$ times to amplify soundness. Each iteration contributes one challenge bit $c_i$.

The Attack

The attacker (playing the prover) wants to forge a proof for $\log_h g$ without knowing it. Here's how:

  1. Guess the challenge bits: The attacker anticipates how many 1 bits will appear in the challenge $c = (c_1, \ldots, c_\lambda)$. Say they guess $k$ ones.

  2. Craft ambiguous commitments: For each iteration $i$, the attacker prepares two possible $\alpha$ values:

    • $\alpha^1 = h^{\rho_i}$ if they plan to answer as if $c_i = 0$
    • $\alpha^2 = h^{\rho_i} g^{-1}$ if they plan to answer as if $c_i = 1$
  3. Commit ambiguously: The attacker sends a byte stream that looks like $\alpha_1 \,\|\, D \,\|\, \alpha_2 \,\|\, D \,\|\, \cdots$ but can be parsed in multiple ways. Specifically, they construct it so that after hashing, they can "shuffle" which bytes belong to which $\alpha_i$ based on the resulting challenge bits.

  4. Reinterpret after seeing the challenge: Once the hash outputs challenge bits $c_i$, the attacker retroactively decides how to parse their commitment. If $c_i = 0$, they claim they sent $\alpha^1 = h^{\rho_i}$. If $c_i = 1$, they claim they sent $\alpha^2 = h^{\rho_i} g^{-1}$. In both cases, they respond with $\tau_i = \rho_i$.

  5. Verification passes: The verifier checks $h^{\tau_i} \stackrel{?}{=} \alpha_i \cdot g^{c_i}$. If the attacker correctly shuffled:

    • For $c_i = 0$: $h^{\rho_i} = \alpha^1$
    • For $c_i = 1$: $h^{\rho_i} = \alpha^2 \cdot g = (h^{\rho_i} g^{-1}) \cdot g$

The key insight: as long as the overall byte stream, and hence the hash input, stays the same, the challenge doesn't change no matter how the boundaries are later interpreted. The attacker rearranges $(\alpha, \alpha, \ldots, \alpha, \beta, \beta, \ldots, \beta)$ to match the challenge bits, and it all verifies.

This vulnerability (the α-shuffle attack) was discovered by Verichains in Binance's tss-lib, Multichain, and a few others. The affected implementations and full technical details are described in their blog post.

The Fix: Use unambiguous encoding. Don't just concatenate with delimiters. Modern protocols like FROST and CGGMP20 mandate specific serialization formats precisely to avoid this. Don't roll your own concatenation scheme.

Not Enough Soundness

Remember that discrete log proof we just talked about? It gets repeated $\lambda$ times, and the soundness error is $2^{-\lambda}$: to forge a proof without knowing the discrete log, an attacker needs to guess all $\lambda$ challenge bits correctly, which happens with probability $2^{-\lambda}$. If you want 128-bit security, you need $\lambda = 128$ repetitions. Straightforward.

The Attack

Some production implementations used $\lambda = 80$, or even $\lambda = 32$. At $\lambda = 80$, an attacker has a $2^{-80}$ chance of forging a proof, feasible with enough compute. At $\lambda = 32$, it's trivial.

The TSSHOCK paper found libraries that just... set the parameter too low. No clever cryptographic insight needed. Just brute-force the challenge space until you get lucky.
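
A toy model shows why a small $\lambda$ is fatal (illustrative throughout: sha256 as the Fiat-Shamir hash, the commitment format, and $\lambda = 12$ so the grind finishes instantly):

```python
# Minimal model of the c-guess attack: with lambda challenge bits, guessing
# the full challenge succeeds after about 2^lambda attempts. LAM = 12 runs
# instantly; the vulnerable libraries shipped lambda = 32 or 80 against an
# intended 2^128 work factor.
from hashlib import sha256
import random

LAM = 12

def challenge(commitment: bytes) -> int:
    # Fiat-Shamir challenge truncated to LAM bits.
    return int.from_bytes(sha256(commitment).digest(), "big") % (1 << LAM)

random.seed(1)
tries = 0
while True:
    tries += 1
    guess = random.getrandbits(LAM)
    # Commitments crafted assuming the challenge will equal `guess`;
    # modeled here as the guess plus fresh randomness.
    commitment = guess.to_bytes(2, "big") + random.randbytes(8)
    if challenge(commitment) == guess:
        break
assert challenge(commitment) == guess    # forged transcript verifies
```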

This attack (c-guess) was also discovered by Verichains in Multichain's implementation.

The Fix: Set $\lambda \geq 128$. Read the security proof. Use the parameters the paper recommends, not whatever seemed "good enough" during implementation.

Non-Existent Inverse

So you read the previous section and thought, "let's just use a larger challenge space instead of repeating the protocol 128 times: sample $c$ from all 256-bit strings instead of just $\{0,1\}$." Sounds reasonable. One round, bigger challenge, same security. Except it doesn't work in composite-order groups.

The Flaw

In the standard dlnproof, you're proving knowledge of $\log_h g$ in a group of order $\text{ord}(g)$. The protocol:

  1. Prover commits $\alpha = h^\rho$ for random $\rho$
  2. Challenge $c \in \{0, 1, \ldots, 2^{256}-1\}$ (large range, single round)
  3. Prover responds $\tau = \rho + c \cdot \text{secret}$

The verifier checks $h^\tau \stackrel{?}{=} \alpha \cdot g^c$. This works fine in prime-order groups. But in composite-order groups (like $\mathbb{Z}_{\tilde{N}}^*$ for an RSA modulus $\tilde{N}$), things break.

The Attack

Suppose the attacker wants to forge a proof for $\log_h g$ when this discrete log doesn't exist. Specifically, let $h = g^e$ where $e > 1$ is a small divisor of $\text{ord}(g)$: then $\log_h g = \frac{1}{e}$ doesn't exist in $\mathbb{Z}_{\text{ord}(g)}$, because $e$ has no multiplicative inverse modulo $\text{ord}(g)$ (since $\gcd(e, \text{ord}(g)) \neq 1$).

The attacker proceeds as follows:

  1. Brute-force random $\rho$ values until the resulting challenge $c = H(\ldots)$ is divisible by $e$. Since $c$ is 256 bits, the probability that $e \mid c$ is roughly $\frac{1}{e}$. If $e$ is small (say, 32 bits), this takes negligible time.
  2. Once $e \mid c$, the attacker constructs the proof: $(\alpha, \tau) = (h^\rho \bmod \tilde{N},\ \rho + \frac{c}{e})$.
  3. Verification:
$$h^\tau = h^{\rho + c/e} = h^\rho \cdot h^{c/e} = \alpha \cdot (g^e)^{c/e} = \alpha \cdot g^c$$

The proof passes, even though the attacker doesn't know $\log_h g$.

Why It Works: In a prime-order group, splitting off $\frac{c}{e}$ buys the forger nothing: every nonzero $e$ has an inverse, so every discrete log exists. But in composite-order groups with a small factor $e$ dividing the order, the attacker can "split" the challenge by that factor and forge proofs for non-existent discrete logs.
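
The forgery can be checked end-to-end in a toy group (a sketch: $\tilde{N} = 35$, $g = 2$ of order 12, $e = 3$; the hash-to-challenge and the ground nonce are illustrative, since in the real attack the prover grinds $\rho$ itself over a huge group):

```python
# Toy c-split forgery in the composite-order group Z_35^*: g = 2 has order
# 12, h = g^3 = 8, and log_h g = 1/3 does not exist mod 12 (gcd(3,12) > 1).
from hashlib import sha256

Ntilde, g, e = 35, 2, 3
h = pow(g, e, Ntilde)                  # h = 8; attacker set h = g^e

def fs_challenge(alpha: int, nonce: int) -> int:
    data = b"%d|%d|%d|%d" % (g, h, alpha, nonce)
    return int.from_bytes(sha256(data).digest(), "big") % (1 << 16)

rho, nonce = 5, 0                      # any rho works; grind until e | c
alpha = pow(h, rho, Ntilde)
while True:
    c = fs_challenge(alpha, nonce)
    if c % e == 0:
        break
    nonce += 1

tau = rho + c // e                     # forged response, no dlog known
# Verifier's check h^tau ?= alpha * g^c passes, because
# h^(c/e) = (g^e)^(c/e) = g^c exactly when e divides c.
assert pow(h, tau, Ntilde) == alpha * pow(g, c, Ntilde) % Ntilde
```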

This is the third attack (c-split) in Verichains' TSSHOCK suite and affects Axelar, ZenGo, and ING. Full details can be found in their article.

The Fix: Either:

  • Stick with prime-order groups (like the elliptic curve subgroup, not $\mathbb{Z}_{\tilde{N}}^*$)
  • Use the multi-round version with binary challenges (80–128 rounds)
  • If you must use composite-order groups with large challenges, work in a subgroup of prime order, or prove that all relevant discrete logs exist before running the protocol

This is why modern threshold ECDSA protocols work over elliptic curves (prime-order subgroups) for signature generation, and only use RSA groups (composite order) carefully in auxiliary ZK proofs with proper parameter checks.

Small Paillier Attack

Even when range proofs are included, implementation bugs can still cause problems. This attack exploits a missing validation on the Paillier modulus size itself.

The Setup

In MtAwc with range proofs, Bob (the party with secret $b$) sends a zero-knowledge proof that includes:

  1. A challenge $e \in \mathbb{Z}_q$ (computed via Fiat-Shamir)
  2. A random blinding value $\gamma \in \mathbb{Z}_N^*$ (where $N$ is Alice's Paillier public key)
  3. A proof element $t_1 = e\beta' + \gamma$ (computed over the integers), where $\beta'$ is Bob's secret from the MtA protocol

The proof is supposed to hide $\beta'$. The security relies on $\gamma$ being large enough to mask $\beta'$ when divided by $e$.

The Attack

The attacker (Alice) generates a Paillier modulus $N$ that's slightly smaller than $2^{256}$. If the protocol doesn't explicitly validate the key size, this gets through.

Now look at what happens when Bob computes $t_1 = e\beta' + \gamma$:

$$\frac{t_1}{e} = \beta' + \frac{\gamma}{e}$$

Normally, $\gamma \in \mathbb{Z}_N^*$ with $N \approx 2^{2048}$ and $e \approx 2^{256}$, so $\frac{\gamma}{e}$ is large enough to hide $\beta'$. But if $N \approx q \approx 2^{256}$, then $\gamma$ is only about 256 bits. Now:

  • With probability $\frac{1}{2}$, we have $\frac{\gamma}{e} < 1$ (i.e., $\gamma < e$)
  • With probability $1 - 2^{-16}$, we have $\frac{\gamma}{e} < 2^{15}$ (i.e., fewer than 15 bits of masking survive)

This means $\beta' \approx \lfloor \frac{t_1}{e} \rfloor$ with high probability. The blinding almost vanishes.
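
The leak is easy to reproduce numerically (a sketch with 32-bit stand-ins for the real 256-bit values):

```python
# Why a small Paillier modulus kills the blinding: with N ~ q ~ e, gamma is
# the same size as e, so floor(t1/e) lands within one of beta'.
import random

random.seed(0)
bits = 32
e = random.getrandbits(bits) | (1 << (bits - 1))   # challenge, ~2^32
beta = random.getrandbits(bits)                    # Bob's secret beta'
gamma = random.getrandbits(bits)                   # blinding, same size as e!

t1 = e * beta + gamma       # proof element, computed over the integers
approx = t1 // e            # attacker's estimate of beta'
assert approx - beta in (0, 1)   # gamma // e is 0 or 1: the blinding is gone
```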

Extracting the Secret

Alice also knows that with $N \approx q$, if she sets her input $a = 1$ in the MtA protocol, the decrypted value $\alpha$ relates to the integer sum $b + \beta'$ in one of two ways:

  1. If $b + \beta' < N$: no modular reduction occurs, so $\alpha = b + \beta'$
  2. If $b + \beta' \in [N, 2N)$: reduction occurs, so $\alpha = b + \beta' - N$

Alice knows $\alpha$ (she decrypted it) and can approximate $\beta' \approx \lfloor \frac{t_1}{e} \rfloor$ from the proof. She tries several candidates:

$$\beta \in \left\{ \left\lfloor \tfrac{t_1}{e} \right\rfloor \bmod q,\ \left\lfloor \tfrac{t_1}{e} \right\rfloor - 1 \bmod q,\ \left\lfloor \tfrac{t_1}{e} \right\rfloor - 2 \bmod q,\ \ldots \right\}$$

For each candidate $\beta$, she computes two possibilities for Bob's secret $b$:

  1. $b^{(1)} = \alpha - \beta \bmod q$ (assuming no wraparound)
  2. $b^{(2)} = \alpha - \beta + N \bmod q$ (assuming wraparound)

She checks which one is correct by verifying $g^{b^{(1)}} \stackrel{?}{=} g^b$ or $g^{b^{(2)}} \stackrel{?}{=} g^b$, where $g^b$ is Bob's public key (known to everyone).

One of these will match. Alice has recovered Bob's secret $b$ in a single signature.

Why It Works

Two bugs compound:

  1. Wrong range for $\gamma$: The ZK proof spec says $\gamma$ should be sampled from $\mathbb{Z}_{q^2 N}$, not $\mathbb{Z}_N^*$. Using $\mathbb{Z}_N^*$ breaks honest-verifier zero-knowledge: dividing by $e$ leaks bits of $\beta'$ regardless of Paillier modulus size.
  2. Missing Paillier size check: Even with the wrong range, if $N$ were properly validated to be $\geq 2^{2048}$, the leakage would be negligible. But without validation, Alice can choose $N \approx q$ and completely remove the blinding.

Together, these bugs turn a single signature into a full key extraction.

This is the Small Paillier Attack from "Alpha-Rays: Key Extraction Attacks on Threshold ECDSA Implementations". The paper reports impact on Binance’s tss-lib and multiple forks.

Incorrect Broadcast Channel

Broadcast channels in MPC aren't just a nice-to-have; they're a security requirement. The assumption is that when a party broadcasts a value, everyone receives the same value. But some implementations cut corners and implement "broadcast" as a loop that sends individual messages to each participant. That's not the same thing, and here's why it matters.

The Attack on Verifiable Secret Sharing

Recall that in Feldman VSS (the basis for Pedersen DKG), the dealer commits to a polynomial $p(x) = a_0 + a_1 x + \cdots + a_{t-1} x^{t-1}$ by publishing commitments $y_i = g^{a_i}$ for each coefficient. Each party $P_j$ receives a share $s_j = p(j)$ and verifies it using:

$$g^{s_j} \stackrel{?}{=} \prod_{i=0}^{t-1} y_i^{j^i}$$

The commitments $\{y_0, y_1, \ldots, y_{t-1}\}$ are supposed to be broadcast: everyone sees the same vector.

Breaking the Broadcast Assumption

Suppose the adversary wants to forge a sharing for a secret $a_0$ they don't know. Specifically, they want everyone to accept $y_0 = g^{a_0}$ as the commitment to the secret, but the adversary doesn't actually know $a_0$.

If broadcast is implemented as individual messages, the adversary can send different commitments to each party:

  1. Create a truncated polynomial: Pick $p(x) = a_2 x^2 + a_3 x^3 + \cdots + a_{t-1} x^{t-1}$ (omit $a_0$ and $a_1$)
  2. Send shares: Compute $s_j = p(j)$ for each party $P_j$ and send them (these are valid evaluations of the truncated polynomial)
  3. Compute most commitments honestly: For $i \geq 2$, compute $y_i = g^{a_i}$ (these are the same for everyone)
  4. Customize $y_1$ for each party: For each party $P_j$, compute a personalized commitment:
$$y_{1,j} = (y_0^{-1})^{j^{-1}}$$
  5. Send different vectors: Send the vector $\{y_0, y_{1,j}, y_2, \ldots, y_{t-1}\}$ to party $P_j$


Why Verification Passes

Party $P_j$ checks:

$$g^{s_j} \stackrel{?}{=} \prod_{i=0}^{t-1} y_i^{j^i}$$

Expanding the right side:

$$\prod_{i=0}^{t-1} y_i^{j^i} = y_0^{j^0} \cdot y_{1,j}^{j^1} \cdot y_2^{j^2} \cdots y_{t-1}^{j^{t-1}}$$

Substituting $y_{1,j} = (y_0^{-1})^{j^{-1}}$:

$$= y_0 \cdot (y_0^{-j^{-1}})^j \cdot y_2^{j^2} \cdots y_{t-1}^{j^{t-1}} = y_0 \cdot y_0^{-1} \cdot y_2^{j^2} \cdots y_{t-1}^{j^{t-1}} = y_2^{j^2} \cdots y_{t-1}^{j^{t-1}}$$

The $y_0$ and $y_1$ terms cancel out! And since $s_j = p(j) = a_2 j^2 + \cdots + a_{t-1} j^{t-1}$, we have:

$$g^{s_j} = g^{a_2 j^2 + \cdots + a_{t-1} j^{t-1}} = y_2^{j^2} \cdots y_{t-1}^{j^{t-1}}$$

The check passes for every party, even though the adversary doesn't know $a_0$ or $a_1$. Each party thinks they have a valid sharing of the secret corresponding to $y_0 = g^{a_0}$, but the adversary never computed consistent shares for that secret.
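
The whole forgery can be checked numerically in a toy group (a sketch: the order-11 subgroup of $\mathbb{Z}_{23}^*$, $t = 4$, and the "unknown-dlog" element $y_0 = 13$ are illustrative):

```python
# Toy forgery against per-recipient "broadcast": order-11 subgroup of Z_23^*
# with generator g = 2, t = 4 (commitments y0..y3). The dealer never learns
# a0 = log_g(y0), yet every party's Feldman check passes.
p, q, g = 23, 11, 2

y0 = 13                          # arbitrary subgroup element; dlog unknown
a2, a3 = 5, 7                    # honestly chosen higher coefficients
y2, y3 = pow(g, a2, p), pow(g, a3, p)

def share(j):
    # Evaluation of the truncated polynomial a2*x^2 + a3*x^3 over Z_q.
    return (a2 * j**2 + a3 * j**3) % q

def personalized_y1(j):
    # y_{1,j} = (y0^{-1})^{j^{-1} mod q}, customized per recipient.
    return pow(pow(y0, -1, p), pow(j, -1, q), p)

def verify(j, s, commitments):
    # Standard Feldman check: g^{s_j} == prod_i y_i^{j^i}.
    rhs = 1
    for k, yk in enumerate(commitments):
        rhs = rhs * pow(yk, pow(j, k, q), p) % p
    return pow(g, s, p) == rhs

# Every party accepts its personalized vector {y0, y_{1,j}, y2, y3}.
for j in range(1, 6):
    assert verify(j, share(j), [y0, personalized_y1(j), y2, y3])
```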

This attack was described in Jake Craige’s write-up "VSS Forgery Against Bad Broadcast Channels".

Closing Thoughts

Threshold schemes remain one of our best tools for distributed trust, but the gap between "provably secure protocol" and "production-ready implementation" is real. The attacks covered here aren't exhaustive; they're a snapshot of what auditors have found. The pattern is clear: missing checks and skipped proofs. Before you ship threshold signatures to production, audit for these classes of bugs. Check parameter sizes. Validate inputs. And include all the proofs the paper specifies. The math works. Make sure your implementation does too.