Zero-knowledge protocols represent a fascinating class of cryptographic techniques. They are called ‘zero-knowledge’ because of the interaction they enable: during execution, one party (the prover) can convincingly demonstrate to another (the verifier) that it possesses specific secret knowledge without revealing the secret itself. More precisely, these protocols ensure that the verifier learns no additional information about the secret beyond what could already be inferred from the data available prior to the protocol’s execution. Thus, a zero-knowledge protocol leaks no information about the key.
This aspect is particularly crucial because such protocols are constructed using public keys that inherently contain complete information about the owner’s private secret key. The notion and implications of ‘zero disclosure’ in this context will be further examined.
The potential application of zero-knowledge proof protocols is primarily in the verification of the authenticity of remote entities (users). These protocols are recognized as provably robust dual-key cryptographic schemes.
The typical implementation of such protocols involves three fundamental steps:
- The prover generates a one-time public key, known as a commitment.
- The verifier produces a random challenge.
- The prover computes a response to this challenge.
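The three steps can be sketched as one round of a Schnorr-style identification protocol. The parameters and function names below are hypothetical toy choices for illustration only, not the specific scheme the text has in mind; real deployments use much larger groups:

```python
import random

# Hypothetical toy parameters: p is prime and alpha has prime order q
# modulo p (2^11 = 2048 ≡ 1 mod 23, so alpha = 2 has order 11).
p, q, alpha = 23, 11, 2
x = 7                    # prover's private key, x < q
y = pow(alpha, x, p)     # prover's public key

def prover_commit():
    """Step 1: pick a one-time random secret k and publish the commitment R."""
    k = random.randrange(1, q)
    return k, pow(alpha, k, p)

def verifier_challenge():
    """Step 2: the verifier produces a random challenge e."""
    return random.randrange(q)

def prover_respond(k, e):
    """Step 3: the prover answers with w = k + e*x mod q."""
    return (k + e * x) % q

def verifier_accepts(R, e, w):
    """Accept iff alpha^w ≡ R * y^e (mod p); this check uses only public values."""
    return pow(alpha, w, p) == (R * pow(y, e, p)) % p

k, R = prover_commit()
e = verifier_challenge()
w = prover_respond(k, e)
assert verifier_accepts(R, e, w)
```

The verifier's check succeeds because alpha^w = alpha^(k + e*x) = alpha^k * (alpha^x)^e = R * y^e mod p, yet the transcript (R, e, w) reveals x only masked by the one-time secret k.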
In addition, these protocols are significant because they can be converted into digital signature schemes.
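A common route for such a conversion is the Fiat–Shamir heuristic: the verifier's random challenge is replaced by a hash of the commitment and the message, making the protocol non-interactive. A minimal sketch with hypothetical toy parameters (illustrative only, not a production scheme):

```python
import hashlib
import random

# Hypothetical toy parameters: alpha has prime order q modulo the prime p.
p, q, alpha = 23, 11, 2
x = 7                    # signer's private key
y = pow(alpha, x, p)     # signer's public key

def hash_challenge(R, msg):
    """Derive the challenge from the commitment and the message."""
    data = R.to_bytes(2, "big") + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(msg):
    k = random.randrange(1, q)
    R = pow(alpha, k, p)          # commitment
    e = hash_challenge(R, msg)    # hash replaces the verifier's challenge
    w = (k + e * x) % q           # response
    return e, w

def verify(msg, e, w):
    # Recompute the commitment from public data, then re-derive the challenge.
    R = (pow(alpha, w, p) * pow(y, -e, p)) % p
    return e == hash_challenge(R, msg)

e, w = sign(b"hello")
assert verify(b"hello", e, w)
```

Verification recovers R as alpha^w * y^(-e) = alpha^(k + e*x) * alpha^(-e*x) = alpha^k mod p, so a valid signature re-derives the same hash-challenge.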
In zero-knowledge protocols, computationally challenging problems of the following types are employed:
- The discrete logarithm problem in a finite cyclic group (or a finite field).
- The factorization problem.
- The problem of extracting roots modulo a large composite number.
- The graph coloring problem and several others.
Interpreting the Concept of ‘Zero Knowledge of Secrets’
A critical methodological aspect in studying zero-knowledge protocols is elucidating the term ‘zero knowledge of secrets’. This concept is inherently linked to the protocol’s resistance to various types of attacks. The robustness of cryptographic schemes (algorithms and protocols) forms a central theme in cryptography. In this context, symmetric cryptosystems are distinguished by conditional (practical) and unconditional (theoretical) robustness.
Practical robustness is associated with the notion that current computational resources (the state of computational technology) do not allow the cryptosystem to be broken within a foreseeable time frame, although it is theoretically possible. Theoretical robustness suggests the resilience of a cryptosystem even in the face of an attacker with infinite computational resources. By their nature, dual-key cryptosystems cannot possess theoretical robustness and are classified under practically robust cryptosystems. The concept of provable robustness is applied to dual-key cryptosystems. A dual-key cryptosystem is said to be provably robust if it can be formally demonstrated that breaking the cryptosystem is no easier than solving the computationally hard problem that underpins it.
Zero-knowledge protocols, which rest on computationally hard problems underlying the computation of the public key from the private key, can also be classified as provably robust cryptosystems. Because the reverse task of computing the private key from the public key is practically infeasible, the public key can be made a publicly accessible protocol parameter. Consequently, a potential intruder possesses information that is theoretically sufficient to compute the private key: the public key itself. The computation of the private key from it nevertheless remains an infeasible task, so despite the theoretical completeness of this information, the intruder lacks the practical means to exploit it. If the authentication protocol adds no new information to this data, the protocol is said to exhibit zero leakage of information about the secret key, or zero disclosure of the secret.
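As a toy illustration of this asymmetry (hypothetical small parameters; the 'hard' direction is trivially brute-forced here, whereas real groups make this search infeasible):

```python
# Hypothetical toy parameters: alpha has prime order q = 11 modulo p = 23.
p, q, alpha = 23, 11, 2
x = 7                          # private key

# Forward direction: computing the public key from the private key is easy.
y = pow(alpha, x, p)

# Reverse direction: y theoretically determines x, but recovering it means
# solving a discrete logarithm -- here by exhaustive search over all exponents.
recovered = next(i for i in range(q) if pow(alpha, i, p) == y)
assert recovered == x
```

With realistic parameters the exponent range is astronomically large, so the public key, while theoretically complete information about x, is practically useless for recovering it.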
Consider the scenario where the owner of the public key, used in the authentication procedure, does not transmit any knowledge about the secret to the verifying party during the protocol execution. It is evident that the owner of the public key (the prover) must inevitably use their private key. However, the protocol allows the verifier to ascertain, with an arbitrarily high probability, that the authenticated entity indeed knows the secret key. The understanding that the values transmitted by the prover do not contain information about the private key implies that these values do not simplify the potential intruder’s task of computing the private key based on the associated public key.
This is the case in the following scenarios:
- The prover transmits to the verifier a value that is already known to the latter.
- The prover sends to the verifier a value that the latter can independently compute before receiving the response.
- The prover sends random values to the verifier.
- The prover sends the verifier pairs or sets of random values satisfying a certain verification relationship that involves the public key. However, statistically indistinguishable pairs or sets of random values can be generated by an intruder using only the prover’s public key.
Statistical indistinguishability (statistical equivalence) of the sets of random values formed by the intruder from those formed by the owner of the public key using the personal secret key means that both kinds of sets can be used equally well in attempts to compute the secret key from the known public key.
For instance, consider a verification relationship of the form:

α^w ≡ R · y (mod p),    (1)

where:
- p is a sufficiently large prime number;
- α is a number of prime order q modulo p;
- y is the prover’s public key, computed from the private secret key x (where x < q) by the formula y = α^x mod p;
- R is the value of the commitment (one-time public key), generated from a random secret value k (k < q) by the formula R = α^k mod p;
- w is the response sent by the prover via an open communication channel.
The set of values α^i mod p, i = 1, 2, …, q, encompasses all possible values of the commitment R. With an equiprobable random choice of k satisfying k < q, the random value R is uniformly distributed, i.e., it assumes any value from this set with equal likelihood. It is easy to see that the response w is calculated by the prover using the formula w = k + x mod q; therefore, with an equiprobable choice of k, the value of w is also uniform, i.e., all possible values of w have the same probability q^-1. Consequently, all possible pairs of values (w, R) generated by the prover and satisfying relationship (1) have the same probability q^-1.
Similarly, equiprobable pairs (w, R) can be generated by an intruder. To do so, the intruder generates a random equiprobable value w < q and calculates the value R = α^w · y^-1 mod p. With this method of generating pairs (w, R) satisfying relationship (1), all possible values of these pairs likewise have the same probability q^-1. Thus, if the availability of equiprobable random pairs (w, R) from their possible value domain somehow simplified the discrete logarithm problem modulo a prime, the intruder could independently generate such pairs in the required quantity without waiting for an execution of the zero-knowledge protocol.
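This argument can be checked directly. The sketch below takes the verification relationship (1) to be α^w ≡ R·y (mod p) with response w = k + x mod q, an assumption consistent with the surrounding definitions, and uses hypothetical toy parameters:

```python
import random

# Hypothetical toy parameters: alpha has prime order q modulo the prime p.
p, q, alpha = 23, 11, 2
x = 7                    # prover's private key
y = pow(alpha, x, p)     # prover's public key

def prover_pair():
    # The prover uses the secret x: R = alpha^k mod p, w = k + x mod q.
    k = random.randrange(q)
    return (k + x) % q, pow(alpha, k, p)

def intruder_pair():
    # The intruder uses only public data: pick w first, then solve (1) for R.
    w = random.randrange(q)
    return w, (pow(alpha, w, p) * pow(y, -1, p)) % p

def satisfies_relation(w, R):
    # Assumed verification relationship (1): alpha^w ≡ R * y (mod p).
    return pow(alpha, w, p) == (R * y) % p

prover_pairs = {prover_pair() for _ in range(1000)}
intruder_pairs = {intruder_pair() for _ in range(1000)}

# Every pair from either source satisfies (1) ...
assert all(satisfies_relation(w, R) for w, R in prover_pairs | intruder_pairs)
# ... and both sources range over the same q equiprobable pairs.
assert prover_pairs == intruder_pairs == {
    ((k + x) % q, pow(alpha, k, p)) for k in range(q)
}
```

The two generation methods produce identical distributions over the same q pairs, which is exactly the statistical indistinguishability the text describes: nothing in the prover's transcripts helps the intruder beyond what the public key alone provides.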
Stay curious, keep learning, and delve deeper into the Aleo ecosystem — the journey is just beginning. Join the community here: