Proof of Robustness in DSS Algorithms

Illy’s Web3 blog
3 min read · Dec 28, 2023


To substantiate the security of digital signature schemes (DSS) based on the hardness of the discrete logarithm problem, one can take the following approach: first synthesize a zero-knowledge protocol, then derive from it a DSS whose robustness is to be justified. Within this framework it can be shown that, for several known DSS, the size of the signature's randomizing parameter can be halved without compromising robustness. This is possible because, for a robust DSS, it suffices that the probability of generating an anticipated query for a fixed document is low; and attacks that exploit the possibility of modifying the signed document in the DSS verification equation can be countered by including an additional value of a second hash function (of double the bit length) computed from the document alone.

Consider the rationale for the robustness of a DSS constructed in this way. Assuming that altering the fixator (the commitment value q) after the query E = FH(q, M) has been computed is practically infeasible (essentially an assumption about the robustness of the hash function), the following can be stated:

  1. Generating any number of signatures does not simplify the task of computing the signer’s Least Common Secret Key (LCSK) from their Open Key (OK).
  2. Signature forgery can be executed with a 2^(-h) probability using low-complexity procedures. To achieve a high probability of successful forgery for a specific document, about 2^h modular multiplication operations are required.
  3. For a high probability of successful forgery in an attack allowing for the modification of the signed document, approximately 2^(h/2) modular multiplication operations are necessary. This forgery complexity is determined by the computational difficulty of finding a hash function collision, utilizing the birthday paradox.
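The 2^(h/2) birthday-paradox estimate in point 3 can be illustrated with a short sketch. This is a toy built on assumptions not in the original: SHA-256 truncated to h bits stands in for the scheme's hash function FH, and h = 20 is chosen only so the search finishes quickly.

```python
# Toy birthday-paradox collision search on an h-bit hash truncation.
# SHA-256 truncated to h bits is a stand-in for the scheme's hash FH (assumption).
import hashlib

h = 20                      # toy parameter; real schemes use a far larger h
mask = (1 << h) - 1

def trunc_hash(msg: bytes) -> int:
    # keep only the low h bits of the SHA-256 digest
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") & mask

seen = {}
i = 0
while True:
    msg = b"document variant %d" % i
    d = trunc_hash(msg)
    if d in seen:
        collision = (seen[d], msg)   # two distinct messages, same h-bit hash
        break
    seen[d] = msg
    i += 1

# the number of trials i is on the order of 2^(h/2) ~ 1024, not 2^h
assert trunc_hash(collision[0]) == trunc_hash(collision[1])
assert collision[0] != collision[1]
```

The dictionary-based search trades memory for time; it finds a collision after roughly 2^(h/2) hash evaluations, which is exactly the forgery-complexity bound cited above.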

These facts suggest that the developed DSS is as robust as the zero-knowledge protocol it is based on. Under the assumption that the hash function is robust, i.e., that it is practically impossible to replace the fixator after the query E has been obtained, any algorithm for breaking the scheme has computational complexity of the same order as the hard problem it rests on. Suppose there exists an attack that forges signatures without exploiting hash-function weaknesses, i.e., one that computes the response W for a given query value under different hash functions FH and F'H. Using this attack, one can generate a random fixator value q and compute two correct responses W and W' for the respective cases of FH and F'H, which yield different queries E = FH(q, M) and E' = F'H(q, M). The correct responses satisfy the following equations:
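The equations referenced here appear to have been lost from the original post (they were likely images). Based on the public-key representation ti = si^k mod p stated below, a plausible reconstruction, offered as an assumption rather than the original formulas, is:

```latex
W^{k} \equiv q \prod_{i=1}^{h} t_i^{\,e_i} \pmod{p},
\qquad
W'^{\,k} \equiv q \prod_{i=1}^{h} t_i^{\,e'_i} \pmod{p},
```

where e_1, …, e_h and e'_1, …, e'_h denote the bits of the queries E and E'.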

By dividing the first equation by the second, we obtain:
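The resulting relation is also missing from the post. Assuming verification equations of the form W^k ≡ q ∏ ti^ei (mod p) and W'^k ≡ q ∏ ti^e'i (mod p), dividing the first by the second cancels the fixator q and plausibly gives:

```latex
\left( \frac{W}{W'} \right)^{k} \equiv \prod_{i=1}^{h} t_i^{\,e_i - e'_i} \pmod{p}.
```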

Repeating this procedure yields a sufficient number of such relations, from which it is easy to find a representation of the OK elements (t1, t2, …, th) as ti = si^k mod p for known values si (for all i = 1, 2, …, h). Thus the hypothesized attack solves the computationally difficult task of extracting roots of large prime degree k modulo a prime p = Nk^2 + 1; i.e., the hypothetical attack has complexity of the same order as the underlying hard problem.
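The reduction can be sketched numerically. Everything below is an illustrative assumption, not the paper's parameters: a tiny prime of the special form p = Nk² + 1 (p = 101 = 4·5² + 1, k = 5) and a single-element key t = s^k mod p in place of the vector (t1, …, th). The Bezout step at the end turns the quotient relation into an explicit k-th root of t, which is exactly the hard problem.

```python
# Toy sketch (assumed parameters): two correct responses to different queries
# for the same fixator q reduce forgery to k-th root extraction mod p.
p = 101            # prime of the special form p = N*k^2 + 1 (N = 4, k = 5)
k = 5
s = 3              # signer's secret: a k-th root of the public key element
t = pow(s, k, p)   # public key element t = s^k mod p

r = 7
q = pow(r, k, p)   # fixator q = r^k mod p
E, E2 = 9, 2       # two query values, as if produced by FH and F'H
W  = (r * pow(s, E,  p)) % p   # correct response for query E
W2 = (r * pow(s, E2, p)) % p   # correct response for query E'

# both satisfy the verification equation W^k = q * t^E (mod p)
assert pow(W,  k, p) == (q * pow(t, E,  p)) % p
assert pow(W2, k, p) == (q * pow(t, E2, p)) % p

# dividing the equations cancels q: (W/W')^k = t^(E-E') (mod p)
ratio = (W * pow(W2, -1, p)) % p
d = E - E2
assert pow(ratio, k, p) == pow(t, d, p)

# k is prime and gcd(d, k) = 1, so with u*d + v*k = 1 (Bezout),
# t = t^(u*d + v*k) = (ratio^u * t^v)^k mod p is an extracted k-th root
u = pow(d, -1, k)          # u*d ≡ 1 (mod k)
v = (1 - u * d) // k
root = (pow(ratio, u, p) * pow(t, v, p)) % p
assert pow(root, k, p) == t   # a k-th root of t has been computed
```

The final assertion confirms that an attacker able to answer two different queries for one fixator can extract a root of prime degree k modulo p, matching the complexity claim above.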

In a similar manner, a formal proof of robustness can be given for DSS with small OK sizes based on the difficulty of extracting roots of large prime degree modulo a prime with a special structure: one proposes a zero-knowledge protocol and then derives from it the DSS scheme whose robustness is to be justified.
