Security
Dec 16, 2025
Attacks on Threshold Schemes: Part 2
Threshold cryptography looks bulletproof on paper. You distribute trust across multiple parties, require a minimum threshold to reconstruct secrets or produce signatures, and suddenly you've got resilience against single points of failure. The math checks out, the security proofs are elegant, and the protocols feel rock-solid.
Then you ship to production and things start breaking. Sometimes it's a missing validation check that nobody thought mattered. Sometimes it's a subtle ambiguity in how data gets encoded. The gap between "provably secure threshold scheme" and "production-ready implementation that won't get your users rekt" is wider than most people realize.
This post walks through real attacks on real threshold systems, the kind that security auditors have actually found in the wild. The goal isn't to scare you away from threshold schemes; they're still one of the best tools we have for distributed trust. The goal is to give you a practical checklist of what actually goes wrong, so you can audit your own implementation with clear eyes before it hits mainnet.
During an audit of FROST, Trail of Bits discovered an interesting vulnerability: the participants do not check the length of the polynomial coefficient vector, meaning the polynomial can be of much higher degree than expected, which would result in an inability to recover the key.
Pedersen's distributed key generation builds on Feldman's verifiable secret sharing, which extends Shamir's scheme by letting participants verify their shares without a trusted dealer. The basic flow: each party $P_i$ generates a random degree-$t$ polynomial $f_i(x) = a_{i0} + a_{i1} x + \dots + a_{it} x^t$, where $a_{i0}$ is their secret contribution. They publish commitments to each coefficient:
$A_{ik} = g^{a_{ik}}, \quad k = 0, \dots, t.$
Then $P_i$ sends the evaluation $f_i(j)$ privately to each party $P_j$. Recipient $P_j$ verifies their share by checking:
$g^{f_i(j)} = \prod_{k=0}^{t} A_{ik}^{\,j^k}.$
To form the final shared secret, each party sums all shares they received: $x_j = \sum_i f_i(j)$. The collective polynomial is $f(x) = \sum_i f_i(x)$, which has degree $t$ and secret $x = f(0) = \sum_i a_{i0}$. Any $t + 1$ parties can recover $x$ via Lagrange interpolation.
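The verification equation is short enough to exercise directly. Here's a toy sketch over a small Schnorr group ($p = 2q + 1$, with $g$ generating the order-$q$ subgroup); all parameters, names, and values are illustrative, not secure:

```python
# Toy Feldman VSS over a Schnorr group: p = 2q + 1, g generates the
# order-q subgroup. Parameters are illustrative, not secure.
p, q, g = 467, 233, 4

t = 2                              # threshold degree: t+1 shares reconstruct
coeffs = [123, 45, 67]             # a_0 is the dealer's secret contribution
commitments = [pow(g, a, p) for a in coeffs]   # A_k = g^{a_k}

def share(j):
    """Evaluate f(j) = sum of a_k * j^k mod q."""
    return sum(a * pow(j, k, q) for k, a in enumerate(coeffs)) % q

def verify(j, s):
    """Check g^s == product of A_k^{j^k} mod p."""
    rhs = 1
    for k, A in enumerate(commitments):
        rhs = rhs * pow(A, pow(j, k, q), p) % p
    return pow(g, s, p) == rhs

# every honest share passes; a corrupted share fails
assert all(verify(j, share(j)) for j in range(1, 6))
```

The exponents are reduced modulo $q$, the order of $g$, which is why the check is well-defined.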
The Attack
During their audit of FROST implementations, Trail of Bits discovered that many don't validate the length of the commitment vector. A malicious $P_i$ can submit a polynomial of degree $t' > t$, say, degree $2t$ or even degree $n$. Since the final polynomial's degree is the maximum of all input degrees, this silently raises the reconstruction threshold from $t + 1$ to $t' + 1$.
If $t' \geq n$, the secret becomes unrecoverable: no subset of parties can reconstruct it, even with everyone's cooperation. All shares are now locked, and any funds controlled by that key are permanently lost. Even if $t' < n$, signature attempts with only $t + 1$ parties will mysteriously fail, and implementations may blame honest participants for the failure.
The fix is trivial: check that each commitment vector has exactly $t + 1$ elements. Trail of Bits found ten vulnerable implementations, including multiple FROST variants and GG18/GG20 libraries.
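A toy sketch of the attack and the fix (small Schnorr group, illustrative parameters): a degree-4 commitment vector sails through every per-share verification, so only the explicit length check catches it:

```python
# A malicious dealer uses a degree-4 polynomial where degree t = 2 was
# expected. Per-share verification still passes; only an explicit
# commitment-length check catches it. Toy parameters (p = 2q + 1).
p, q, g = 467, 233, 4
t = 2                                  # expected degree
evil = [9, 8, 7, 6, 5]                 # degree 4 > t
A = [pow(g, a, p) for a in evil]

def share(j):
    return sum(a * pow(j, k, q) for k, a in enumerate(evil)) % q

def verify(j, s, comms):
    rhs = 1
    for k, c in enumerate(comms):
        rhs = rhs * pow(c, pow(j, k, q), p) % p
    return pow(g, s, p) == rhs

# Every honest party's per-share check still passes...
assert all(verify(j, share(j), A) for j in range(1, 8))
# ...so the defense is validating the vector length up front:
assert len(A) != t + 1    # reject: must be exactly t+1 commitments
```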
The Multiplicative-to-Additive (MtA) protocol is common in threshold schemes: it converts multiplicative shares of a product $a \cdot b$ into additive shares $\alpha + \beta = a \cdot b$. Most implementations include zero-knowledge proofs to ensure parties behave honestly, but these proofs require careful setup.
The Setup
Each party generates auxiliary parameters for the ZK proofs: an auxiliary RSA modulus $\tilde N$ and two group elements $h_1, h_2 \in \mathbb{Z}_{\tilde N}^*$.
The commitment in the proof typically looks like $z = h_1^{m} h_2^{\rho} \bmod \tilde N$, where $m$ is the value being proven and $\rho$ is blinding randomness. The security relies on the prover not knowing $\log_{h_1} h_2$: if they did, they could break the proof's soundness.
The Attack
The protocol should include a check that $h_1$ and $h_2$ generate the same cyclic subgroup and that their discrete log relation is unknown. But many implementations skip this verification entirely, trusting the sender to have generated them correctly.
An attacker exploits this by setting $h_2 = 1$. Now the commitment collapses:
$z = h_1^{m} \cdot 1^{\rho} = h_1^{m} \bmod \tilde N.$
The blinding term vanishes, and $z$ directly reveals $h_1^{m}$. To extract $m$, the attacker has two options:
Option 1: Integer logarithm Choose $\tilde N$ extremely large, so that $h_1^{m}$ never wraps around the modulus; computing the discrete log then becomes no harder than computing an integer logarithm, which is trivial with standard algorithms.
Option 2: Pohlig-Hellman Choose $\tilde N$ as a product of many small primes: $\tilde N = q_1 q_2 \cdots q_k$ where each $q_i$ is small. The order of $\mathbb{Z}_{\tilde N}^*$ is then $\prod_i (q_i - 1)$, which is smooth. Using Pohlig-Hellman, the attacker computes $m$ modulo each small factor of the order (fast, since the factors are small), then reconstructs $m$ via the Chinese Remainder Theorem.
Either way, the attacker recovers $m$ and breaks the proof. This leaks the victim's secret value that was supposed to be protected by the zero-knowledge property.
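A toy run of Option 2, with one simplification: a small prime modulus whose group order is smooth stands in for the attacker's smooth $\tilde N$, since the mechanics (brute-force in tiny subgroups, then CRT) are identical:

```python
# The smooth-order trick in code: a toy prime P with P - 1 = 2520 =
# 8 * 9 * 5 * 7, so the discrete log splits into tiny subproblems glued
# back together with CRT. Parameters are illustrative, not secure.
P = 2521
order = P - 1
factors = [8, 9, 5, 7]           # pairwise-coprime prime powers of 2520

# find a generator of Z_P* by trial
g = next(x for x in range(2, P)
         if all(pow(x, order // s, P) != 1 for s in (2, 3, 5, 7)))

m_secret = 1234                  # the value the commitment should hide
y = pow(g, m_secret, P)          # what the collapsed commitment reveals

def crt(residues, moduli):
    """Chinese Remainder Theorem for pairwise-coprime moduli."""
    M = 1
    for f in moduli:
        M *= f
    x = 0
    for r, f in zip(residues, moduli):
        Mf = M // f
        x += r * Mf * pow(Mf, -1, f)
    return x % M

residues = []
for f in factors:
    gf = pow(g, order // f, P)   # element of order f
    yf = pow(y, order // f, P)   # equals gf^(m mod f)
    residues.append(next(e for e in range(f) if pow(gf, e, P) == yf))

assert crt(residues, factors) == m_secret   # full exponent recovered
```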
This vulnerability was discovered in Binance's TSS library and described in the paper "Attacking Threshold Wallets".
The Fix: The sender should prove in zero-knowledge that $h_1$ and $h_2$ generate the same cyclic group, by proving knowledge of $\log_{h_1} h_2$ and $\log_{h_2} h_1$. Additionally, the Paillier modulus must be proven to be the product of two primes of sufficient size.
Proofs of knowledge aren't just formalities; they're critical checkpoints that prevent malicious parties from feeding garbage into the protocol. The BitGo wallet vulnerability shows what happens when you skip them.
The Protocol
BitGo's TSS implementation follows the GG18 key generation protocol, which has three phases:
Commit: Each player $P_i$ selects a secret key share $u_i$ and broadcasts: a commitment to $g^{u_i}$, along with a Paillier public key $N_i$.
Reveal & Share: Each player opens their commitment (reveals $g^{u_i}$), then performs Feldman VSS to distribute shares of $u_i$. The final shared secret is $x = \sum_i u_i$, with public key $y = g^{x}$. Each player ends up with a share $x_i$ such that any threshold subset can reconstruct $x$.
Prove: Each player should prove in zero-knowledge both knowledge of their secret share $x_i$ (a Schnorr-style proof) and that their Paillier modulus $N_i$ is well-formed (the product of exactly two large primes).
BitGo's implementation skipped phase 3 entirely. No proofs. Just trust.
The Attack
The attacker exploits the missing biprimality proof by choosing a malicious Paillier modulus $N$ with a prime factor $p = 2p' + 1$, where $p'$ is a small prime that also divides $N$ (nothing stops this: with no proof, $N$ needn't be the product of two primes at all). This is a safe prime construction, but with a twist: the multiplicative group $\mathbb{Z}_p^*$ has a special structure that leaks information. The attacker also picks any square (say, $h = 4$), broadcasts $N$ as their Paillier parameters, and later sends $h$ in place of a genuine ciphertext.
During the protocol, an honest party (the victim) will encrypt their share under the attacker's key. They send:
$c = h^{x_i} \cdot \mathrm{Enc}_N(\rho) \bmod N^2,$
where $x_i$ is the victim's secret share and $\rho$ is some randomness. The Paillier encryption is $\mathrm{Enc}_N(\rho) = (1 + N)^{\rho} \cdot r^N \bmod N^2$ for random $r$.
Now the attacker reduces modulo $p$:
$c \equiv h^{x_i} \cdot (1 + N)^{\rho} \cdot r^N \pmod{p}.$
The $(1 + N)^{\rho}$ term vanishes mod $p$, since $N \equiv 0 \pmod{p}$. Next, the attacker computes:
$d = c^2 \bmod p = h^{2 x_i} \cdot r^{2N} \bmod p.$
Here's the key observation: in $\mathbb{Z}_p^*$, the order is $p - 1 = 2p'$. Every square (like $h$) has order dividing $p'$, and $r^{2p'} \equiv 1 \pmod{p}$ by Fermat's little theorem. So:
$d \equiv h^{2 x_i} \pmod{p},$
because $r^{2N} = (r^2)^N \equiv 1 \pmod{p}$: the square $r^2$ has order dividing $p'$, and $p' \mid N$.
Now the attacker has $h^{2 x_i} \bmod p$. Since $p'$ is small enough (the attacker chose it), they can brute-force the discrete log to recover $x_i \bmod p'$.
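Here is a toy run of that reduction, under the reconstruction sketched above (tiny illustrative parameters: $p' = 11$, $p = 23$, and a non-biprime $N = p' \cdot p \cdot q$; the victim's message format is the assumed one from this section):

```python
# Toy run: p = 2p' + 1 = 23 with p' = 11, and the attacker's modulus
# N = p' * p * q is not biprime (nothing checks -- the proof is missing).
from random import randrange

pp, p, q = 11, 23, 47          # p = 2*pp + 1
N = pp * p * q
h = 4                          # any square: its order mod p divides pp

x_victim = 7                   # the victim's secret share (unknown to attacker)

# Victim's message: h^x * Enc_N(rho) = h^x * (1+N)^rho * r^N mod N^2
rho = randrange(N)
r = 2                          # any unit works; fixed for reproducibility
c = pow(h, x_victim, N * N) * pow(1 + N, rho, N * N) % (N * N)
c = c * pow(r, N, N * N) % (N * N)

# Attacker: reduce mod p, square to kill r^N (the order of r^2 divides
# pp, and pp | N), then brute-force the tiny discrete log base h^2.
d = pow(c % p, 2, p)
recovered = next(e for e in range(pp) if pow(h * h % p, e, p) == d)
assert recovered == x_victim % pp
```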
Scaling the Attack
If $p'$ is too large for a single brute-force, the attacker can use a product of many such pairs: $N = \prod_i p'_i \, p_i$, where each $p_i = 2p'_i + 1$ and each $p'_i$ is small (say, 16 bits). They run the attack in parallel to recover $x_i \bmod p'_i$ for each $i$, then use the Chinese Remainder Theorem to reconstruct $x_i$.
With 16 such pairs and 16-bit primes, the attacker recovers a 256-bit secret share in a few hours on commodity hardware. Once they have one share, they can sign arbitrary transactions unilaterally, completely defeating the threshold property.
This vulnerability was discovered by Fireblocks during a security analysis of BitGo's implementation and described in their blog post.
The Fix: Include the missing proofs. The Paillier modulus must be proven biprime (the product of exactly two large primes), and each player must prove knowledge of their share $x_i$.
Sometimes the attack isn't a missing check; it's that the implementation gives the prover knowledge they were never supposed to have. This vulnerability in a two-party ECDSA reshare protocol shows how knowing "too much" can be weaponized.
The Reshare Protocol
Key resharing is standard practice in production TSS wallets: you periodically rotate shares to limit exposure. The protocol looks simple enough:
1. Alice samples fresh randomness and derives her new share $x_1'$.
2. She sends Bob an encrypted update under her Paillier key.
3. She attaches a zero-knowledge proof that the new share is consistent with the old one, so the combined secret is preserved.
Bob receives Alice's message, verifies the proof, and updates his own share to $x_2'$. The combined secret remains unchanged: $x_1' + x_2' = x_1 + x_2 = x$.
The Problem
The zero-knowledge proof for step 3 requires two moduli, $N_1$ and $N_2$. The security assumption from the literature: "Let $N_1$ be a large composite number whose factorization is unknown by Alice and Bob, and $N_2$ be another large number, prime or composite, whose factorization is known or unknown by Alice."
But in this implementation, Alice knows both factorizations: she generated both Paillier keys herself. She can now turn the reshare protocol into an oracle that leaks Bob's share bit by bit.
The Attack
Alice manipulates her new share to be $x_1' = x_1 + \delta$, where $\delta$ is chosen so that:
the combined value $x_1' + x_2'$ wraps around $N_2$ exactly when Bob's share lies above a threshold of Alice's choosing.
Bob doesn't detect anything wrong. He combines Alice's malicious share with his rotated share:
$s = x_1' + x_2'.$
When they run the signing protocol, Bob sends his partial signature to Alice. Alice decrypts the combined result and reduces it mod $q$, the curve order. But during Paillier decryption, a reduction mod $N_2$ happens first. This is where the oracle kicks in: if $s \geq N_2$, the value wraps and the signature comes out invalid; if $s < N_2$, it verifies.
Alice just learned whether $x_2$ crosses a specific threshold. She records the result.
Extracting the Secret
By repeating this reshare-and-sign cycle with different values of $\delta$ and $N_2$, Alice narrows down Bob's share. Each successful or failed signature reveals one bit of information about $x_2$. With enough iterations (proportional to the bit length of $x_2$), Alice reconstructs Bob's entire share.
Once she has both $x_1$ and $x_2$, the threshold property collapses. Alice can sign any transaction unilaterally.
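Stripped of the protocol details, the leak is a comparison oracle, and a comparison oracle is just binary search. A generic sketch (the `oracle` function and the bound are stand-ins for one reshare-and-sign round, not the actual wire protocol):

```python
# Each reshare-and-sign round tells Alice whether Bob's share crosses a
# threshold she chose -- a comparison oracle. Binary search then
# recovers the share in bit-length many rounds.
SECRET = 0xC0FFEE            # stands in for Bob's share x_2

def oracle(threshold):
    """One reshare-and-sign round: did the signature survive the wrap?"""
    return SECRET >= threshold

lo, hi = 0, 1 << 24          # known bound on the share
while lo + 1 < hi:
    mid = (lo + hi) // 2
    if oracle(mid):
        lo = mid             # SECRET >= mid
    else:
        hi = mid             # SECRET < mid
assert lo == SECRET          # recovered after ~24 oracle calls
```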
This issue was found during an analysis of a production two-party ECDSA implementation and is detailed in the paper "Attacking Threshold Wallets".
The Fix
The ZK proof must enforce that Alice doesn't know the factorization of $N_1$. Better yet, the library maintainers did what made most sense: they replaced the vulnerable reshare with the full two-party key generation protocol. Alice and Bob run a coin-flip protocol to refresh their shares and repeat all ZK proofs from DKG. It's slower, but it's correct, and it doesn't give Alice an oracle.
Fiat-Shamir is supposed to be straightforward: hash the transcript, use the output as your challenge. But if you're sloppy about how you encode the transcript, you hand the prover a degree of freedom they're not supposed to have.
The Flaw
When converting an interactive proof to non-interactive via Fiat-Shamir, you need to hash multiple values together: group elements, integers, commitments, etc. A naive approach: concatenate everything with a delimiter.
$e = H(a_1 \parallel a_2 \parallel \cdots \parallel a_n),$
where $\parallel$ is some delimiter byte sequence (say, "||" or a null byte). The problem: during the "opening" phase, there's ambiguity. Is the first element $a_1$, or is it $a_1 \parallel a_2$? If the prover can shift boundaries around, they can manipulate which parts of the input get interpreted as which values, without changing the hash output.
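A few lines make the ambiguity concrete: two different transcripts collapse to the same byte stream under naive delimiter-joining, and therefore to the same challenge (the delimiter choice is illustrative):

```python
# Two different transcripts, identical bytes once naively joined with a
# delimiter -- so they hash to the same Fiat-Shamir challenge.
import hashlib

DELIM = b"||"

def naive_encode(tokens):
    return DELIM.join(tokens)

t1 = [b"g^r", b"A||B", b"c"]
t2 = [b"g^r", b"A", b"B||c"]   # different parse, same byte stream
assert naive_encode(t1) == naive_encode(t2)
assert hashlib.sha256(naive_encode(t1)).digest() == \
       hashlib.sha256(naive_encode(t2)).digest()
```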
Example: Discrete Log Proof
Consider a simple discrete log proof (like the one used in threshold signature ZK subprotocols): the prover knows $x$ such that $y = g^{x}$. In each iteration they send a commitment $\alpha_i = g^{r_i}$ for random $r_i$, receive a challenge bit $b_i$, and respond with $z_i = r_i + b_i \cdot x$; the verifier checks $g^{z_i} = \alpha_i \cdot y^{b_i}$.
The proof is repeated $t$ times to amplify soundness. Each iteration contributes one challenge bit $b_i$.
The Attack
The attacker (playing the prover) wants to forge a proof for $y = g^{x}$ without knowing $x$. Here's how:
Guess the challenge bits: The attacker anticipates how many 1 bits will appear in the challenge $b_1 \cdots b_t$. Say they guess $k$ ones.
Craft ambiguous commitments: For each iteration $i$, the attacker prepares two possible values:
$\alpha_i^{(0)} = g^{r_i} \quad \text{and} \quad \alpha_i^{(1)} = g^{r_i} \cdot y^{-1}.$
Commit ambiguously: The attacker sends a byte stream that looks like $\alpha_1 \parallel \alpha_2 \parallel \cdots \parallel \alpha_t$ but can be parsed in multiple ways. Specifically, they construct it so that after hashing, they can "shuffle" which bytes belong to which $\alpha_i$ based on the resulting challenge bits.
Reinterpret after seeing the challenge: Once the hash outputs challenge bits $b_1 \cdots b_t$, the attacker retroactively decides how to parse their commitment. If $b_i = 0$, they claim they sent $\alpha_i^{(0)}$. If $b_i = 1$, they claim they sent $\alpha_i^{(1)}$. In both cases, they respond with $z_i = r_i$.
Verification passes: The verifier checks $g^{z_i} = \alpha_i \cdot y^{b_i}$. If the attacker correctly shuffled:
$g^{r_i} = \alpha_i^{(0)} \quad \text{when } b_i = 0, \qquad g^{r_i} = \alpha_i^{(1)} \cdot y \quad \text{when } b_i = 1.$
The key insight: as long as the number of tokens in the byte stream matches what the hash function expects (regardless of how they're interpreted), the hash remains unchanged. The attacker rearranges the $\alpha_i$ to match the challenge bits, and it all verifies.
This vulnerability (the α-shuffle attack) was discovered by Verichains on Binance's tss-lib, Multichain, and a few others. The affected implementations and full technical details are described in their blog post.
The Fix Use unambiguous encoding. Don't just concatenate with delimiters. Modern protocols like FROST and CGGMP20 mandate specific serialization formats precisely to avoid this. Don't roll your own concatenation scheme.
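One unambiguous option is to length-prefix every field before hashing. A minimal sketch (the 4-byte big-endian header is an illustrative choice, not any particular protocol's mandated format):

```python
# Length-prefix every field: each token's byte length is serialized as a
# fixed 4-byte header, so no rearrangement of boundaries survives decoding.
import hashlib
import struct

def encode(tokens):
    return b"".join(struct.pack(">I", len(t)) + t for t in tokens)

def challenge(tokens):
    return hashlib.sha256(encode(tokens)).digest()

t1 = [b"g^r", b"A||B", b"c"]
t2 = [b"g^r", b"A", b"B||c"]
assert encode(t1) != encode(t2)          # boundaries are now committed
assert challenge(t1) != challenge(t2)    # distinct transcripts, distinct challenges
```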
Remember that discrete log proof we just talked about? It gets repeated $t$ times, and the soundness error is $2^{-t}$. To forge a proof without knowing the discrete log, an attacker needs to guess all $t$ challenge bits correctly, probability $2^{-t}$. If you want 128-bit security, you need $t = 128$ repetitions. Straightforward.
The Attack
Some production implementations used far fewer repetitions: $t = 32$, or even $t = 16$. At $t = 32$, an attacker has a $2^{-32}$ chance of forging a proof per attempt, feasible with enough compute. At $t = 16$, it's trivial.
The TSSHOCK paper found libraries that just... set the parameter too low. No clever cryptographic insight needed. Just brute-force the challenge space until you get lucky.
This attack (c-guess) was also discovered by Verichains in Multichain's implementation.
The Fix: Set $t \geq 128$. Read the security proof. Use the parameters the paper recommends, not whatever seemed "good enough" during implementation.
So you read the previous section and thought, "let's just use a larger challenge space instead of repeating the protocol 128 times: sample from all 256-bit strings instead of just $\{0, 1\}$." Sounds reasonable. One round, bigger challenge, same security. Except it doesn't work in composite-order groups.
The Flaw
In the standard dlnproof, you're proving knowledge of $x$ such that $h_2 = h_1^{x} \bmod \tilde N$, in a group whose order we'll call $m$. The protocol: the prover picks a random $r$, sends the commitment $a = h_1^{r}$, receives a challenge $c$, and responds with $z = r + c \cdot x$.
The verifier checks $h_1^{z} = a \cdot h_2^{c} \bmod \tilde N$. This works fine in prime-order groups. But in composite-order groups (like $\mathbb{Z}_{\tilde N}^*$ for an RSA modulus $\tilde N$), things break.
The Attack
Suppose the attacker wants to forge a proof for $h_2 = h_1^{x}$ when this discrete log doesn't exist. Specifically, let $h_2$ be a $d$-th root of $h_1$ outside $\langle h_1 \rangle$, where $d$ is a small divisor of the group order $m$: the exponent $x = 1/d$ doesn't exist in $\mathbb{Z}_m$ because $d$ has no multiplicative inverse modulo $m$ (since $\gcd(d, m) = d > 1$).
The attacker proceeds as follows:
1. Pick a random $r$ and send the commitment $a = h_1^{r}$.
2. Derive the Fiat-Shamir challenge $c$. If $d \nmid c$, retry with a fresh $r$ (about $d$ attempts on average, since $d$ is small).
3. Respond with $z = r + c/d$. The check passes: $h_1^{z} = h_1^{r} \cdot h_1^{c/d} = a \cdot (h_2^{d})^{c/d} = a \cdot h_2^{c}$.
The proof passes, even though the attacker doesn't know $x$; no such $x$ exists.
Why It Works In a prime-order group, $1/d$ wouldn't be well-defined unless $d$ has an inverse modulo the order, and for a prime order it always does, so $h_2$ would be an honest power of $h_1$ and there would be nothing to forge. But in composite-order groups with a small factor dividing the order, the attacker can "split" the challenge by that factor and forge proofs for non-existent discrete logs.
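The whole forgery fits in a short script. A toy sketch: a small prime modulus gives $\mathbb{Z}_P^*$ a composite order, the hash-based challenge and its 16-bit range are illustrative, and the grinding loop plays the Fiat-Shamir retry step:

```python
# Forging "h2 = h1^x" when no such x exists: h1 = h2^d, so h2 lies
# outside <h1>, yet grinding the nonce until d | c makes the check pass.
import hashlib

P = 2521                  # prime, but Z_P* has composite order 2520
order = P - 1
d = 5                     # small divisor of the group order

# h2 generates all of Z_P*; h1 = h2^d generates a strict subgroup, so
# log_{h1}(h2) genuinely does not exist.
h2 = next(x for x in range(2, P)
          if all(pow(x, order // s, P) != 1 for s in (2, 3, 5, 7)))
h1 = pow(h2, d, P)
assert all(pow(h1, e, P) != h2 for e in range(order))   # no honest witness

def challenge(a):
    digest = hashlib.sha256(str(a).encode()).digest()
    return int.from_bytes(digest, "big") % (1 << 16)    # one big challenge

r = 0
while True:               # grind the commitment until d divides c
    r += 1
    a = pow(h1, r, P)
    c = challenge(a)
    if c % d == 0:
        break

z = r + c // d            # "split" the challenge by d
assert pow(h1, z, P) == a * pow(h2, c, P) % P   # verifier accepts
```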
This is the third attack (c-split) in Verichains' TSSHOCK suite and affects Axelar, ZenGo, and ING. Full details can be found in their article.
The Fix Either:
- keep the challenge space binary and repeat the proof $t \geq 128$ times, or
- validate the group before accepting large challenges: prove the modulus is well-formed, so the group order has no small divisors the prover can split the challenge by.
This is why modern threshold ECDSA protocols work over elliptic curves (prime-order subgroups) for signature generation, and only use RSA groups (composite order) carefully in auxiliary ZK proofs with proper parameter checks.
Even when range proofs are included, implementation bugs can still cause problems. This attack exploits a missing validation on the Paillier modulus size itself.
The Setup
In MtAwc with range proofs, Bob (the party with secret $x$) sends a zero-knowledge proof that includes: a range bound on $x$, and commitments tying his response ciphertext to the blinding value $\beta'$ he used.
The proof is supposed to hide $x$. The security relies on $\beta'$ being large enough to mask $a \cdot x$ when the decrypted value is reduced modulo $N$.
The Attack
The attacker (Alice) generates a Paillier modulus $N$ that's slightly smaller than the minimum size the protocol's analysis assumes. If the protocol doesn't explicitly validate the key size, this gets through.
Now look at what happens when Bob computes his MtA response:
$c_B = c_A^{x} \cdot \mathrm{Enc}_N(\beta') \bmod N^2,$
which Alice decrypts to $\alpha' = a \cdot x + \beta' \bmod N$.
Normally, with $N \approx 2048$ bits and $q \approx 256$ bits, the sum $a \cdot x + \beta'$ stays far below $N$, so $\beta'$ fully hides $a \cdot x$. But if $N$ is undersized, $a \cdot x + \beta'$ can exceed $N$ while still staying below $2N$. Now:
$\alpha' = a \cdot x + \beta' - t \cdot N.$
This means $t \in \{0, 1\}$ with high probability. The blinding almost vanishes.
Extracting the Secret
Alice also knows that with $t \in \{0, 1\}$, if she sets her input $a$ in the MtA protocol to a value of her choosing, the decrypted value (over the integers) falls into one of two ranges:
$\alpha' = a \cdot x + \beta' \quad \text{or} \quad \alpha' = a \cdot x + \beta' - N.$
Alice knows $\alpha'$ (she decrypted it) and can approximate $\beta'$ from the proof. She tries several candidates $\hat\beta'$ around that approximation.
For each candidate $\hat\beta'$, she computes two possibilities for Bob's secret $x$:
$\hat x_0 = (\alpha' - \hat\beta') \cdot a^{-1} \bmod q \quad \text{or} \quad \hat x_1 = (\alpha' + N - \hat\beta') \cdot a^{-1} \bmod q.$
She checks which one is correct by verifying $g^{\hat x_0} = X_B$ or $g^{\hat x_1} = X_B$, where $X_B$ is Bob's public key (known to everyone).
One of these will match. Alice has recovered Bob's secret in a single signature.
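A toy version of the candidate check, with stand-in parameters: a small Schnorr group replaces the curve, the "Paillier" side is reduced to plain modular arithmetic, and the blinding approximation is taken to be exact:

```python
# The undersized modulus means the decrypted MtA value lost at most one
# multiple of N, so two candidates cover all cases; Bob's public key
# pins down the right one. All values are illustrative.
P, q, g = 467, 233, 4        # g has order q mod P ("curve" stand-in)
x_bob = 201                  # Bob's secret share
X_B = pow(g, x_bob, P)       # Bob's public key, known to everyone

N = 60000                    # Alice's undersized "Paillier" modulus
a = 150                      # Alice's chosen MtA input
beta = 55555                 # Bob's blinding (Alice approximates it)

alpha = (a * x_bob + beta) % N       # what Alice decrypts

a_inv = pow(a, -1, q)
candidates = [(alpha - beta) * a_inv % q,        # no wrap (t = 0)
              (alpha + N - beta) * a_inv % q]    # one wrap (t = 1)
recovered = [c for c in candidates if pow(g, c, P) == X_B]
assert recovered == [x_bob]
```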
Why It Works
Two bugs compound:
- the implementation never validates the size of the counterparty's Paillier modulus, and
- the range proof is verified against parameters derived from that same undersized modulus, so it still passes.
Together, these bugs turn a single signature into a full key extraction.
This is the Small Paillier Attack from "Alpha-Rays: Key Extraction Attacks on Threshold ECDSA Implementations". The paper reports impact on Binance’s tss-lib and multiple forks.
Broadcast channels in MPC aren't just a nice-to-have; they're a security requirement. The assumption is that when a party broadcasts a value, everyone receives the same value. But some implementations cut corners and implement "broadcast" as a loop that sends individual messages to each participant. That's not the same thing, and here's why it matters.
The Attack on Verifiable Secret Sharing
Recall that in Feldman VSS (the basis for Pedersen DKG), the dealer commits to a polynomial $f(x) = a_0 + a_1 x + \dots + a_t x^t$ by publishing commitments $A_k = g^{a_k}$ for each coefficient. Each party $j$ receives a share $s_j = f(j)$ and verifies it using:
$g^{s_j} = \prod_{k=0}^{t} A_k^{\,j^k}.$
The commitments are supposed to be broadcast: everyone sees the same vector $(A_0, \dots, A_t)$.
Breaking the Broadcast Assumption
Suppose the adversary wants to forge a sharing for a secret they don't know. Specifically, they want everyone to accept $A_0 = X$ as the commitment to the secret, but the adversary doesn't actually know $x = \log_g X$.
If broadcast is implemented as individual messages, the adversary can send different commitments to each party. In the degree-1 case, party $j$ gets an arbitrary share $s_j$ along with the tailored vector:
$\left(A_0, A_1^{(j)}\right) = \left(X, \ \left(g^{s_j} \cdot X^{-1}\right)^{j^{-1}}\right),$
where $j^{-1}$ is taken modulo the group order.
Why Verification Passes
Party $j$ checks:
$g^{s_j} \stackrel{?}{=} A_0 \cdot \left(A_1^{(j)}\right)^{j}.$
Expanding the right side:
$A_0 \cdot \left(A_1^{(j)}\right)^{j} = X \cdot \left(\left(g^{s_j} \cdot X^{-1}\right)^{j^{-1}}\right)^{j}.$
Substituting $j^{-1} \cdot j \equiv 1$ modulo the group order:
$X \cdot g^{s_j} \cdot X^{-1}.$
The $X$ and $X^{-1}$ terms cancel out! And since only $g^{s_j}$ remains, we have:
$g^{s_j} = A_0 \cdot \left(A_1^{(j)}\right)^{j}.$
The check passes for every party, even though the adversary doesn't know $x$ or any consistent polynomial behind the shares. Each party thinks they have a valid sharing of the secret corresponding to $A_0 = X$, but the adversary never computed consistent shares for that secret.
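The tailoring is mechanical. A toy degree-1 run over a small Schnorr group (illustrative parameters; $X$'s discrete log is only "unknown" by convention here, since the demo has to construct $X$ somehow):

```python
# The adversary commits to X without using its discrete log, picks an
# arbitrary share s_j per party, and tailors the second commitment C_j
# so each party's Feldman check passes. Toy Schnorr group p = 2q + 1.
p, q, g = 467, 233, 4
X = pow(g, 7, p)            # pretend log_g(X) is unknown to the adversary

def tailored_commitments(j, s_j):
    # C_j = (g^{s_j} * X^{-1})^(j^{-1} mod q)
    base = pow(g, s_j, p) * pow(X, -1, p) % p
    return [X, pow(base, pow(j, -1, q), p)]

def feldman_check(j, s_j, comms):
    rhs = comms[0] * pow(comms[1], j, p) % p
    return pow(g, s_j, p) == rhs

shares = {1: 50, 2: 121, 3: 9}          # arbitrary, mutually inconsistent
assert all(feldman_check(j, s, tailored_commitments(j, s))
           for j, s in shares.items())
```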
This attack was described in Jake Craige’s write-up "VSS Forgery Against Bad Broadcast Channels".
Threshold schemes remain one of our best tools for distributed trust, but the gap between "provably secure protocol" and "production-ready implementation" is real. The attacks covered here aren't exhaustive; they're a snapshot of what auditors have found. The pattern is clear: missing checks and skipped proofs. Before you ship threshold signatures to production, audit for these classes of bugs. Check parameter sizes. Validate inputs. And include all the proofs the paper specifies. The math works. Make sure your implementation does too.