
Threshold Signature Ceremony Attacks: How a Single Malicious Participant Biases Key Generation in FROST


@0xrafasec · February 18, 2026 · decentralized_systems_security




Legal & Ethical Disclaimer

This content is provided for EDUCATIONAL and AUTHORIZED SECURITY TESTING purposes only.

DO
  • Use these techniques on systems you own or have explicit written permission to test
  • Practice in authorized lab environments (VulnHub, HackTheBox, DVWA, etc.)
  • Follow responsible disclosure practices when finding vulnerabilities
  • Use knowledge for defensive security and authorized penetration testing
DO NOT
  • Access systems without explicit authorization
  • Use these techniques for malicious purposes
  • Deploy exploits against production systems you don't own
  • Share working exploits for unpatched vulnerabilities

Legal warning

Unauthorized access to computer systems is illegal in most jurisdictions (e.g. CFAA in the US, Computer Misuse Act in the UK). Violators may face criminal prosecution and civil liability. The author and publisher assume no liability for misuse of this information. By continuing, you agree to use this knowledge ethically and legally.

Hook & Context

MPC wallets have become the infrastructure layer of institutional crypto custody. The promise is compelling: no single key, no single point of failure. But this promise rests on a ceremony that most engineers treat as a black box — the Distributed Key Generation (DKG) protocol. When you deploy FROST in a t-of-n signing quorum, the security of every subsequent signature is entirely contingent on the integrity of that bootstrapping ceremony. If the DKG is compromised, the wallet is compromised — silently, permanently, and retroactively.

FROST (Flexible Round-Optimized Schnorr Threshold Signatures) is increasingly the default choice for production threshold systems. It is fast, produces compact Schnorr-compatible signatures, and has solid academic backing. But the FROST DKG inherits a class of vulnerabilities that are not unique to FROST — they stem from the fundamental challenge of combining independently generated randomness from mutually distrusting parties. A single malicious participant who deviates precisely during the polynomial broadcast phase can bias the group's shared secret toward a value they know, effectively reconstructing the group private key alone, while every honest participant believes the ceremony completed correctly.

This piece dissects that attack in full. We cover the commitment-reveal structure of the DKG, the exact deviations a rogue participant makes, what the attack looks like from the honest participants' perspective (spoiler: nothing), and the cryptographic controls — verifiable secret sharing commitments, abort-on-equivocation — that implementers must enforce before calling a DKG ceremony production-ready.


TL;DR

| Question | Answer |
| --- | --- |
| What is the attack? | A rogue participant manipulates their DKG polynomial so the aggregated group secret is biased toward a value they can reconstruct |
| When is it possible? | During the key generation ceremony — before any signatures are produced |
| Who is affected? | Any FROST deployment that does not enforce VSS commitment verification and equivocation detection |
| Is it detectable? | Not by inspection alone — requires cryptographic commitment checks and broadcast consistency proofs |
| What stops it? | Feldman VSS commitments + Proof of Knowledge of secret coefficient + abort-on-equivocation |

Foundations & Theory

Schnorr Signatures as the Base Layer

FROST produces threshold Schnorr signatures. In a standard Schnorr scheme, a signer holds private key x, publishes X = x·G, and produces signatures (R, s) such that s·G = R + H(R, X, m)·X. In FROST, the private scalar x is split across participants using Shamir's Secret Sharing so that no coalition of t-1 or fewer participants can reconstruct it. The group public key X is publicly known; the scalar x must never exist in any single location.
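The verification equation above can be exercised in a toy additive group — integers mod a prime q with "generator" G = 1, so k·G is just k mod q. This is cryptographically useless (discrete logs are trivial), but it makes the algebra s·G = R + H(R, X, m)·X concrete; all names (q, sign, verify) are illustrative, not from any real library.

```python
# Toy Schnorr signature over the additive group Z_q with G = 1.
# INSECURE by construction — this only illustrates the algebra in the text.
import hashlib
import secrets

q = 2**127 - 1  # a prime modulus (toy parameter, not a real curve order)
G = 1           # "generator": scalar multiplication k*G is just k mod q

def H(R, X, m):
    data = f"{R}|{X}|{m}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(x, m):
    k = secrets.randbelow(q)        # nonce
    R = (k * G) % q                 # nonce commitment
    e = H(R, (x * G) % q, m)
    s = (k + e * x) % q
    return R, s

def verify(X, m, R, s):
    return (s * G) % q == (R + H(R, X, m) * X) % q

x = secrets.randbelow(q)            # private key
X = (x * G) % q                     # public key
R, s = sign(x, "hello")
assert verify(X, "hello", R, s)
assert not verify(X, "tampered", R, s)
```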

Shamir Secret Sharing Revisited

In (t, n) Shamir's Secret Sharing, a dealer generates a polynomial f(z) = a_0 + a_1·z + ... + a_{t-1}·z^{t-1} where a_0 is the secret. Each of n participants receives the evaluation f(i) for their index i. Any t shares allow reconstruction via Lagrange interpolation; fewer than t shares reveal nothing about a_0. The critical property: the secret is the constant term of the polynomial.
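A minimal sketch of the dealer-based scheme just described, over a toy prime field (p here is illustrative, not a production parameter): shares are evaluations of f, and Lagrange interpolation at z = 0 recovers the constant term.

```python
# Minimal (t, n) Shamir secret sharing over GF(p) — toy parameters.
import secrets

p = 2**127 - 1

def make_shares(secret, t, n):
    # f(z) = secret + a_1*z + ... + a_{t-1}*z^{t-1}
    coeffs = [secret] + [secrets.randbelow(p) for _ in range(t - 1)]
    f = lambda z: sum(a * pow(z, k, p) for k, a in enumerate(coeffs)) % p
    return {i: f(i) for i in range(1, n + 1)}

def reconstruct(shares):
    # Lagrange interpolation at z = 0 recovers the constant term a_0
    secret = 0
    for i, s_i in shares.items():
        num, den = 1, 1
        for j in shares:
            if j != i:
                num = (num * -j) % p
                den = (den * (i - j)) % p
        secret = (secret + s_i * num * pow(den, -1, p)) % p
    return secret

shares = make_shares(12345, t=2, n=3)
assert reconstruct({1: shares[1], 3: shares[3]}) == 12345  # any 2 of 3 suffice
```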

In FROST's DKG, there is no dealer. Every participant is simultaneously a sharer and a share-receiver. Each participant i generates their own polynomial f_i(z) with a secret constant term a_{i,0}. The group secret is the sum x = Σ a_{i,0} across all participants. The group public key is X = x·G = Σ a_{i,0}·G.
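The dealerless construction above can be sketched as follows — again over a toy field, with no network, commitments, or verification (algebra only): each participant runs its own Shamir instance, participant j's signing share is the sum of everyone's evaluations at j, and any t signing shares interpolate to the group secret x.

```python
# Dealerless DKG algebra: the group secret is the sum of the constant terms.
import secrets

p = 2**127 - 1
t, n = 2, 3
participants = range(1, n + 1)

# Each participant i samples its own degree t-1 polynomial f_i
polys = {i: [secrets.randbelow(p) for _ in range(t)] for i in participants}
f = lambda coeffs, z: sum(a * pow(z, k, p) for k, a in enumerate(coeffs)) % p

# Participant j's signing share is the sum of everyone's evaluations at j
signing_share = {j: sum(f(polys[i], j) for i in participants) % p
                 for j in participants}

# The group secret x = sum of constant terms (never materialized in practice)
x = sum(polys[i][0] for i in participants) % p

def interpolate(shares):
    # Lagrange interpolation at z = 0 over any t signing shares
    out = 0
    for i, s_i in shares.items():
        num, den = 1, 1
        for j in shares:
            if j != i:
                num, den = (num * -j) % p, (den * (i - j)) % p
        out = (out + s_i * num * pow(den, -1, p)) % p
    return out

assert interpolate({1: signing_share[1], 2: signing_share[2]}) == x
```

This works because the sum of the n polynomials is itself a degree t-1 polynomial whose constant term is x, and the signing shares are its evaluations.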


Where It Fits in the Workflow


The attack window is entirely within Rounds 1 and 2. Once the ceremony completes without abort, the damage is done. This is why detection controls must be enforced inside the ceremony, not as a post-hoc audit.


Key Concepts in Depth

1. The Commitment-Reveal Phase (Round 1)

In FROST's DKG, Round 1 requires each participant i to:

  1. Sample a random polynomial f_i(z) = a_{i,0} + a_{i,1}·z + ... + a_{i,t-1}·z^{t-1}
  2. Compute Feldman VSS commitments: C_{i,k} = a_{i,k}·G for each coefficient k
  3. Compute a Proof of Knowledge (PoK) of a_{i,0} — a Schnorr proof that they know the discrete log of C_{i,0}
  4. Broadcast {C_{i,0}, C_{i,1}, ..., C_{i,t-1}, PoK_i} to all participants

The commitments bind the participant to a specific polynomial before shares are exchanged. The PoK prevents the rogue key attack by ensuring the participant knows their own secret coefficient — they cannot claim a public key contribution they cannot sign with.

The security guarantee here is entirely conditional on the broadcast being consistent. Every participant must receive the same commitment vector from participant i. If participant i sends different commitment vectors to different recipients, they are equivocating — and this is the primary attack surface.
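The Round 1 PoK is an ordinary Schnorr proof for C_{i,0}, bound to the prover's identity so it cannot be replayed by another participant (the FROST DKG binds a context string for the same reason). A sketch in the same toy additive group (G = 1, mod q) — all function names here are illustrative:

```python
# Schnorr proof of knowledge of a_{i,0}, bound to a participant id.
# Toy group Z_q with G = 1: INSECURE, protocol shape only.
import hashlib
import secrets

q = 2**127 - 1
G = 1

def challenge(pid, C0, R):
    data = f"{pid}|{C0}|{R}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove_knowledge(pid, a0):
    k = secrets.randbelow(q)
    R = (k * G) % q
    e = challenge(pid, (a0 * G) % q, R)
    return R, (k + e * a0) % q          # (commitment, response)

def verify_pok(pid, C0, proof):
    R, mu = proof
    return (mu * G) % q == (R + challenge(pid, C0, R) * C0) % q

a0 = secrets.randbelow(q)
C0 = (a0 * G) % q
proof = prove_knowledge(3, a0)
assert verify_pok(3, C0, proof)         # valid for participant 3
assert not verify_pok(1, C0, proof)     # replay under another id fails
```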

2. The Rogue Participant's Deviation

Here is the precise attack. Suppose participant m is malicious in a (2, 3) threshold group with participants {1, 2, 3} where m = 3.

Legitimate behavior (what Round 2 requires):

  • Participant 3 sends share f_3(1) to participant 1 and f_3(2) to participant 2
  • Both shares must be consistent with the committed polynomial: f_3(j)·G = Σ_k C_{3,k}·j^k

Rogue behavior — the bias attack:

Without equivocation detection, participant m can do the following:

  1. Observe the Round 1 commitments C_{1,0} and C_{2,0} from the honest participants
  2. Pick a scalar x_target they control and compute the target group key X_target = x_target·G
  3. Broadcast C_{m,0} = X_target − C_{1,0} − C_{2,0}, so the aggregated group key X = C_{1,0} + C_{2,0} + C_{m,0} equals X_target — a key whose discrete log participant m alone knows

This is the rogue key attack in its simplest form. The PoK requirement (step 3 of Round 1) is specifically designed to close it: participant m does not know the discrete log of a crafted C_{m,0} = X_target − C_{1,0} − C_{2,0} — that would require knowing the honest secrets a_{1,0} and a_{2,0} — so they cannot produce a valid Schnorr proof for it. Because the PoK must accompany the commitment in the Round 1 broadcast, m cannot retroactively choose a contribution that cancels out the honest ones.
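The bias algebra is easy to demonstrate in the toy additive group (G = 1, arithmetic mod q; parameter names are illustrative): the attacker's crafted commitment forces the aggregate key to a value whose secret they precomputed.

```python
# Rogue-key bias: participant 3 waits for C_{1,0} and C_{2,0}, then
# broadcasts a crafted C_{3,0} so the aggregate key is one it controls.
# Toy group Z_q with G = 1 — illustrates the algebra, not real crypto.
import secrets

q = 2**127 - 1
G = 1

a1, a2 = secrets.randbelow(q), secrets.randbelow(q)  # honest secrets
C1, C2 = (a1 * G) % q, (a2 * G) % q                  # honest commitments

x_target = secrets.randbelow(q)                      # attacker-chosen secret
X_target = (x_target * G) % q
C3 = (X_target - C1 - C2) % q                        # crafted commitment

X_group = (C1 + C2 + C3) % q
assert X_group == X_target    # attacker alone knows the group key's dlog

# The PoK defense: participant 3 would need the discrete log of C3,
# i.e. x_target - a1 - a2, which requires knowing a1 and a2 — so the
# Schnorr PoK for C3 cannot be produced and the ceremony aborts.
```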

The residual attack vector — when the PoK exists but equivocation does not — is more subtle:

Participant m broadcasts different C_{m,0} values to different subsets of participants. To participant 1 they send C'_{m,0}, and to participant 2 they send C''_{m,0}. Each honest participant verifies the shares they receive against the commitment they received, so both pass local verification. But the group is now operating on an inconsistent view of the public key, and participant m has engineered a scenario where the actual aggregated key is one they control.

3. Verifiable Secret Sharing Commitments as the Defense

Feldman VSS commitments transform Shamir's scheme into one where share validity is publicly verifiable. When participant i sends share s_{i,j} = f_i(j) to participant j, recipient j verifies:

s_{i,j} · G == Σ_{k=0}^{t-1} C_{i,k} · j^k

If this check fails, participant j knows participant i sent an inconsistent share and can raise an accusation. This closes the inconsistent share attack but not the inconsistent commitment broadcast attack. A rogue participant who sends consistent (but different) commitment vectors to different participants can still pass per-recipient VSS checks.
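The per-recipient check is mechanical. A sketch in the toy group (G = 1, mod q; all names illustrative): an honest share satisfies the commitment equation, a tampered share is caught immediately.

```python
# Feldman VSS check: s_{i,j}*G == sum_k C_{i,k} * j^k, in the toy group.
import secrets

q = 2**127 - 1
G = 1
t = 2

coeffs = [secrets.randbelow(q) for _ in range(t)]   # participant i's poly
commits = [(a * G) % q for a in coeffs]             # C_{i,k} = a_{i,k}*G

def share_for(j):
    # s_{i,j} = f_i(j)
    return sum(a * pow(j, k, q) for k, a in enumerate(coeffs)) % q

def vss_verify(j, s):
    lhs = (s * G) % q
    rhs = sum(C * pow(j, k, q) for k, C in enumerate(commits)) % q
    return lhs == rhs

assert vss_verify(2, share_for(2))                  # honest share passes
assert not vss_verify(2, (share_for(2) + 1) % q)    # tampered share caught
```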

(Diagram: per-recipient verification alone is insufficient — each honest participant's VSS check passes against the commitment vector it personally received, even when those vectors differ across recipients.)

4. Abort-on-Equivocation: The Missing Enforcement Layer

The complete defense requires a consistent broadcast primitive — effectively ensuring every participant receives the same Round 1 message from participant m. In practice, this is implemented by:

  • Commitment to a public bulletin board: All Round 1 messages are posted to an append-only, publicly readable channel (a smart contract, a public blockchain, or an authenticated broadcast service). Every participant reads all others' commitments from the same source before proceeding to Round 2.
  • Cross-participant commitment comparison: After receiving Round 1 messages, participants compare hashes of others' commitment vectors out-of-band or via a second round of acknowledgments.
  • Equivocation proofs: If participant j and participant k can both prove they received different commitments from participant m — by presenting signed/authenticated messages — the protocol must abort and exclude m.

The FROST specification (RFC 9591, developed as draft-irtf-cfrg-frost) explicitly requires that commitment lists used during signing are consistent and authenticated. However, the DKG phase is underspecified in many implementations, leaving the consistent broadcast guarantee to the deployment environment. This is the gap that real-world implementations routinely miss.
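The cross-participant comparison from the list above can be sketched in a few lines — hash each received Round 1 commitment vector and compare digests in a second round. This assumes messages are sender-authenticated (e.g. signed), so a digest mismatch doubles as an equivocation proof; all names here are illustrative.

```python
# Equivocation detection via commitment-digest comparison (sketch).
import hashlib

def commitment_digest(sender_id, commitments):
    # Canonical hash of a Round 1 commitment vector from sender_id
    data = f"{sender_id}:" + ",".join(str(c) for c in commitments)
    return hashlib.sha256(data.encode()).hexdigest()

def detect_equivocation(views):
    # Abort (and exclude the sender) if any two received views disagree
    return len(set(views)) > 1

# Views of participant 3's Round 1 broadcast, as received by two peers
view_p1 = commitment_digest(3, [111, 222])   # what participant 1 received
view_p2 = commitment_digest(3, [999, 222])   # what participant 2 received

assert detect_equivocation([view_p1, view_p2])        # 3 equivocated: abort
assert not detect_equivocation([view_p1, view_p1])    # consistent views: ok
```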

5. The SageMath Verification Angle

For auditors and implementers, SageMath provides an ideal environment to verify ceremony transcripts. Given a complete set of published commitment vectors {C_{i,k}} and the claimed group public key X, a verifier can check:

```python
# In SageMath: verify a published DKG ceremony transcript.
# Assumed names: C[i][k] are the published Feldman commitment points,
# shares[i][j] the exchanged scalar shares, G the group generator,
# t the threshold, X_claimed the claimed group public key.

# Verify group key derivation consistency
X_derived = sum(C[i][0] for i in participants)
assert X_derived == X_claimed, "Group key inconsistency detected"

# Verify each share against its commitment
for j in participants:
    for i in participants:
        lhs = shares[i][j] * G
        rhs = sum(C[i][k] * pow(j, k) for k in range(t))
        assert lhs == rhs, f"Invalid share from {i} to {j}"
```

This kind of transcript verification should be a standard post-ceremony audit step before any funds are committed to addresses derived from the group key.


Alternatives & Comparison

| Protocol | DKG Type | Equivocation Protection | PoK Requirement | Round Complexity |
| --- | --- | --- | --- | --- |
| FROST | Pedersen DKG | Deployment-dependent | ✅ Yes (Schnorr PoK) | 2 rounds |
| GG20 | Feldman VSS + ZKPs | Partial (ZK proofs) | ✅ Yes | 4 rounds |
| ECDSA MPC (Lindell) | Paillier-based | Strong (UC model) | ✅ Yes | 3 rounds |
| DKLs18/DKLs19 | Oblivious transfer | Strong | ✅ Yes | 2 rounds |
| Shamir (naive) | Dealer-based | ❌ None | ❌ No | 1 round |

GG20 (used by many early institutional MPC implementations) includes more elaborate zero-knowledge proofs during key generation but has its own well-documented vulnerabilities — notably the 2021 disclosure of a malicious key generation attack by Verichains. DKLs19 has stronger security proofs and is increasingly preferred for new deployments, but is more complex to implement correctly. The consistent broadcast problem is present in all multi-party DKGs to varying degrees; what differs is how explicitly each protocol specifies the mitigation.


Takeaways & Further Reading

  • The FROST DKG's security reduces to two in-ceremony checks: Feldman VSS verification of every received share, and a Schnorr proof of knowledge of each participant's secret constant term.
  • Neither check helps if the Round 1 broadcast is inconsistent — equivocation detection (a bulletin board or cross-participant digest comparison) with abort-and-exclude is mandatory.
  • The attack window closes when the ceremony completes; controls must run inside the ceremony, and the full transcript should be audited before any funds are committed to addresses derived from the group key.