Discussion: Post-quantum security and ethical considerations over elliptic curve cryptography #131
Putting this discussion here for good measure: #106
Switch commitments which bind to the amount are insufficient. A QC cannot forge how much an output is worth, yet it can claim a spent output unspent. This is only detectable if more XMR, or more outputs, are migrated than legitimately exist. We need to bind to the addresses and enforce a derivation scheme for them. An address is derived as follows:
For any …, please note the lack of bounds on …. Switch Commitments, as originally defined, were just ElGamal commitments (not perfectly blinding with a process to switch to perfectly binding). The proposal for an ElGamal commitment, hashed and summed with the blinding factor in a Pedersen Commitment, was made later as an optimization. I honestly don't know why that proposal exists as it does. If we assume a QC cannot find a preimage, then simply defining the PC randomness as the hash of a secret preimage suffices. If we define the PC randomness as such a hash, the following holds.
If an adversary with a QC attempts to open the PC with distinct randomness, they'll lack the preimage. If an adversary with a QC attempts to open the address …, they'll likewise lack the preimage. This allows us to provide key images for outputs made under FCMPs++, without forgeries, even once a QC exists. The ability to prove key images is the ability to keep proving outputs weren't previously spent, and to prove ownership. An adversary with your address can find the outputs you've received. They cannot immediately find the place of spend given an address, even with a QC. A malicious output which sends to address A yet claims to send to address B can be created, as described in #130. Without a QC, the sender would not know the discrete logarithms for address B if it was randomly selected. With a QC, they wouldn't know the necessary preimages. Address B can be derived from the discrete logarithms of A, but again, without the necessary preimages. A few notes:
Prior designs were about embedding a PQ signature verification key into addresses, avoided here by defining knowledge of the hash preimages as the post-quantum secret.
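A minimal toy sketch of this idea, assuming only that the PC randomness is defined as a hash of a secret preimage: opening the commitment then requires exhibiting the preimage itself, not merely a scalar. The group, generators, and domain labels below are illustrative stand-ins, not the actual construction:

```python
import hashlib
import secrets

P = 2**127 - 1   # toy prime modulus (placeholder, not a secure choice)
G = 3            # toy generator for the blinding term
H = 5            # toy generator for the value term

def hash_to_scalar(*parts: bytes) -> int:
    digest = hashlib.blake2b(b"".join(parts)).digest()
    return int.from_bytes(digest, "little") % (P - 1)

def commit(value: int, preimage: bytes) -> int:
    # the PC randomness is defined as a hash of a secret preimage
    r = hash_to_scalar(b"pc_randomness", preimage)
    return (pow(H, value, P) * pow(G, r, P)) % P

def open_commitment(c: int, value: int, preimage: bytes) -> bool:
    # a valid opening presents the preimage itself; solving discrete logs
    # yields some (value, r) pair, but not a preimage hashing to that r
    return c == commit(value, preimage)

preimage = secrets.token_bytes(32)
c = commit(100, preimage)
assert open_commitment(c, 100, preimage)
assert not open_commitment(c, 100, secrets.token_bytes(32))
```

Even an adversary who can compute discrete logarithms only recovers some (value, r) pair for c; producing a preimage that hashes to that r is a separate problem, hard under preimage resistance.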
See #105 for the existing switch commitments issue, and https://github.com/kayabaNerve/monero-pq for my initial sketches of commitments for a PQ composition.
We can't have this relationship for addresses if we want to support subaddresses. The relationship between …. I'm confused as to why we should include …. I do agree, though, that without a post-quantum secure range proof on ElGamal commitments, making the PC blinding factor a function of some ElGamal commitment is pointless, since we need to reveal the blinding factor anyways. If we're revealing the blinding factor, a simple preimage will suffice.
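For context, a toy rendering of the legacy subaddress derivation that creates this complication, written in a toy multiplicative group (group addition becomes modular multiplication); the parameters and byte encodings are illustrative stand-ins:

```python
import hashlib

P = 2**127 - 1   # toy prime modulus (placeholder)
G = 3            # toy generator

def hs(*parts: bytes) -> int:
    # stand-in for Monero's H_s hash-to-scalar
    return int.from_bytes(hashlib.blake2b(b"".join(parts)).digest(), "little") % (P - 1)

def subaddress(a: int, B: int, major: int, minor: int) -> tuple[int, int]:
    # m = H_s("SubAddr" || a || major || minor)
    m = hs(b"SubAddr", a.to_bytes(32, "little"),
           major.to_bytes(4, "little"), minor.to_bytes(4, "little"))
    D = (B * pow(G, m, P)) % P   # D = B + m*G in the real, additive notation
    C = pow(D, a, P)             # C = a*D: multiplicative in the view key a
    return C, D
```

In the real scheme the subaddress spend key is D = B + m*G with m = H_s(a || major || minor), and the view key is C = a*D; that multiplication by a, applied to a key that already depends on a hash, is what doesn't collapse into a single flat preimage.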
Heard regarding subaddresses instead of standard addresses, as I sketched.
This isn't true if we open in a ZK proof as I proposed.
This isn't true, as unspentness requires a functioning key image system. A functioning key image system requires binding to a spend key, its ….
So key images still work.
But key images wouldn't.
This still requires knowing whether an output is unspent or not, which requires a functioning key image system. The exact attack: I have 100 XMR now, churn it 1,000 times, then migrate each output once, for a total of 100,000 XMR, despite 999 of those outputs having already been spent (see the sketch below).
It has infinite key images with FCMPs++, to an adversary with a QC, and isn't sufficient by itself.
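A toy sketch of the accounting behind that churn attack, assuming migration claims are checked against a set of consumed key images; the key image strings are stand-ins for real group elements:

```python
def migrate(outputs, spent_key_images):
    migrated = 0
    for amount, key_image in outputs:
        if key_image in spent_key_images:
            continue  # spent (or already migrated): reject the claim
        spent_key_images.add(key_image)
        migrated += amount
    return migrated

# 100 XMR churned 1,000 times produces 1,000 outputs of 100 XMR each,
# of which 999 are already spent on-chain.
outputs = [(100, f"ki_{i}") for i in range(1000)]
spent = {f"ki_{i}" for i in range(999)}

# Without any key image check, every output "migrates": 100,000 XMR.
assert sum(amount for amount, _ in outputs) == 100_000
# With the check, only the one unspent output migrates: 100 XMR.
assert migrate(outputs, set(spent)) == 100
```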
We don't need the address view pubkey if we provide a way to derive the one-time sender extensions with a hash, not letting the prover actually provide them. And then we shouldn't need the address spend pubkey either, if we make proving secret knowledge of some random address intractable for QCs, as you're suggesting. Let's say that we are given an address spend pubkey ….
But all of them are intractable to find, even with a QC, if we verify addresses to be constructed a certain way. I understand that we need key images to be intact for the migration, but I'm saying we also need to not use FCMPs for the membership part of the proofs, since a QC can fake an element being inside a set with FCMPs, but it can't fake a fetch from a DB. Thus, we need to reference outputs explicitly and do key image checks on that output pubkey.
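A toy sketch of the hash-derived one-time sender extensions mentioned above, where the verifier recomputes the extension from public data and the prover never supplies it; the toy group, hash, and labels are illustrative assumptions:

```python
import hashlib

P = 2**127 - 1   # toy prime modulus (placeholder)
G = 3            # toy generator

def ext(shared: bytes) -> int:
    # the one-time sender extension, defined as a hash of the shared secret
    return int.from_bytes(hashlib.blake2b(b"ext" + shared).digest(), "little") % (P - 1)

def one_time_key(K_spend: int, shared: bytes) -> int:
    # K_o = K_spend + H(shared)*G, written multiplicatively in the toy group
    return (K_spend * pow(G, ext(shared), P)) % P

def verify_output(K_o: int, K_spend: int, shared: bytes) -> bool:
    # the verifier recomputes the extension; the prover never supplies it
    return K_o == one_time_key(K_spend, shared)
```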
I never proposed using FCMPs at this point (at least, never on purpose). See my third note. I proposed a PQ ZK proof for the migration itself. If we don't create "addresses" as I originally stated, yet only the root key pair, subaddresses still work so long as the future migration proof performs subaddress derivation in-circuit. I do hear that I didn't say anything about subaddresses; I agree those are critical and, to be honest, would be fine only supporting subaddresses. I'd love to discuss further optimizations. I'm discussing chucking everything into what would be a complicated, expensive, future proof. If we can achieve less work within that proof, or remove the need for it to be ZK entirely, great. I can't yet comment on your sketch above but will try to do so later.
Root key pairs now are …. I previously wrote about …. We generate …. CARROT defines an …. Fundamentally, the inevitable migration requires verifying the root public spend key, its derivation into a subaddress public spend key, and its derivation into a one-time key. I'll repeat the obvious, hard constraints:
I'll add the correction …. This means we need to verify …. If we don't use a ZK proof, the preimages for each of these terms will be leaked. If there are currently any secrets (such as the ECDH) in there, an additional hash must be performed so they're no longer present (as already done by CARROT, AFAIK), to not be leaked in such an event. We can generate …. CARROT does have a multiplicative scalar in its subaddress derivation. That needs its preimage proven, and its derivatives checked, as …. I'll also clarify the multisig migration path. A multisig generates …. If we are to decide a PQ scheme now, instead of expecting a ZK proof to open …. Since I'm unhappy with the idea of some giant ZK-STARK proving several Blake2s hashes, I believe the loss in privacy to solely "common ownership of these outputs, prior unspent" is acceptable. If anyone wishes to not face the loss in privacy, they can migrate while the PQ scheme simultaneously runs, before we disable spending FCMP++ outputs with the ECC proofs. With all of the above, Carrot Pedersen commitments can just have their randomness be the hash of their randomness (as @jeffro256 pointed out). It's the key itself which enforces its own verifiability after the fact. I'm sorry for not realizing that's what you were communicating sooner, jeffro; it entirely slipped past me. This scheme here probably just redoes the key scheme tevador already did. This can be done as a distinct topic entirely from switch commitments (with distinct timing too).
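A toy sketch of that last point, that randomness defined as the hash of an underlying value lets a migration reveal the inner hash without exposing what sits beneath it; the labels and hash choice are illustrative, not CARROT's actual derivation:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.blake2b(data).digest()

ecdh_pubkey = b"\x01" * 32             # stand-in for the ECDH pubkey
inner = H(b"inner" + ecdh_pubkey)      # the "hash of the ECDH"
blinding = H(b"blinding" + inner)      # the blinding factor: a hash of a hash

# A migration can reveal `inner`: anyone recomputes `blinding` and checks
# the commitment, while the ECDH pubkey stays behind a preimage-resistant
# hash and is never leaked.
assert H(b"blinding" + inner) == blinding
```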
I agree with almost everything here, and what's nice about this scheme is that no modifications to Carrot need to be made to support switch commitments without revealing the private view key. Since the PC blinding factor is already defined as a hash of a hash of the ECDH pubkey, we can reveal the hash of the ECDH in the PQ migration. The one thing that we can't do currently is:
We cannot bind the one-time sender extensions to the subaddress spend key because the receiver doesn't know which subaddress they are scanning for until they unwrap it, which is only known after calculating the extensions and subtracting from the one-time output pubkey. JAMTIS was able to solve this by encrypting the "address tag" to the receiver, a bit of information which told the receiver which subaddress was the target. However, if we want to support legacy addresses, we don't have this luxury.
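A toy sketch of the scanning order that imposes this constraint: the extension is computed first, subtracted from the one-time output pubkey, and only the resulting candidate key, looked up in a table, reveals which subaddress was paid. The group and names are illustrative stand-ins:

```python
import hashlib

P = 2**127 - 1   # toy prime modulus (placeholder)
G = 3            # toy generator

def ext(shared: bytes) -> int:
    return int.from_bytes(hashlib.blake2b(b"ext" + shared).digest(), "little") % (P - 1)

def scan(K_o: int, shared: bytes, subaddress_table: dict) -> "int | None":
    e = ext(shared)
    # "subtract" the extension (divide by G^e in the toy group) to recover
    # the candidate subaddress spend key
    candidate = (K_o * pow(pow(G, e, P), P - 2, P)) % P
    # only now, via table lookup, does the receiver learn which subaddress
    # (if any) was paid; the extension couldn't have depended on it
    return subaddress_table.get(candidate)
```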
Actually, I will propose one difference: including the amount in the hash-to-PC-blinding-factor. Consider the two constructions: …. If a quantum adversary randomly generates …. Now consider the second scheme, brute forcing pairs of ….
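A toy contrast of the two constructions as I read them, assuming the first hashes only a secret into the blinding factor while the second also hashes the amount; all parameters are illustrative stand-ins:

```python
import hashlib

P = 2**127 - 1   # toy prime modulus (placeholder)
G, H_pt = 3, 5   # toy generators for the blinding and value terms

def h(*parts: bytes) -> int:
    return int.from_bytes(hashlib.blake2b(b"".join(parts)).digest(), "little") % (P - 1)

def commit_without_amount(amount: int, s: bytes) -> int:
    # r = H(s): one preimage opens the commitment to whatever amount a
    # discrete-log solver computes for it
    return (pow(H_pt, amount, P) * pow(G, h(b"r", s), P)) % P

def commit_with_amount(amount: int, s: bytes) -> int:
    # r = H(s || amount): the claimed amount is an input to the hash, so
    # each candidate amount requires its own preimage search
    r = h(b"r", s, amount.to_bytes(8, "little"))
    return (pow(H_pt, amount, P) * pow(G, r, P)) % P
```

In the first construction, an adversary who finds any preimage s gets a valid-looking opening to whatever amount the discrete log works out to; in the second, every candidate amount changes the hash input, so the search must hit a consistent (s, amount) pair.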
Completely heard on also binding to the amount. What if we remove commitments as soft targets and bind to the spend key there? Then the output key map check can be done to find which spend key was sent to, and then we can recreate the commitment. Spitballing; I haven't considered the consequences of that at all. I know you expressed amenability to that idea earlier, but I'm unsure you fully scoped it when you did, jeffro.
I think binding to the address spend pubkey …. Both are DAGs. Also, the receiver still doesn't need a subaddress lookahead table to recompute amount commitments, since they can unwrap ….
This issue has been created to centralize discussion around post-quantum mitigations for the next Monero hard forks.
A reminder:
Moratorium
I created this issue because I agree with @kayabaNerve that there is an ethical aspect to the current MRL roadmap. In the most pessimistic scenarios, Y2Q could happen in 5 years, while the most optimistic scenarios expect it in ~10-25 years. Pressure is mounting in the industry: NIST has already standardized some PQ algorithms (Kyber, SPHINCS+, Dilithium), and more are waiting to be standardized or are entering use after study (Falcon, S-NTRU).
We are at a turning point, where it will only become harder for Monero users to defend themselves from the total de-anonymization of the blockchain:
10 years is how long most legal service providers using BTC or XMR keep payment information in their databases.
10 years is the average length of the statute of limitations in democratic nations.
10 years is half the length of the statute of limitations in non-democratic nations (China, Russia).
Monero promises privacy, security, and untraceability. While most users may just hold or spend in perfectly legal situations in their jurisdiction, some users actually trust this technology to ensure their freedom of speech, or with their lives at stake.
Whether we want it or not, the Monero Project and research community bear part of the responsibility for ensuring that future usage of Monero will not retroactively endanger them.
I therefore agree with @kayabaNerve: a parallel effort must absolutely be started on implementing post-quantum security for Monero, with the ultimate goal of seeing it in production within at most 5 years, as FCMP++ and JamtisRCT already provide privacy improvements against an ECDLP solver.