Using NOISE_FLOODING_DECRYPT for CKKS in a multiparty setting

Hello.

I am trying to use the NOISE_FLOODING_DECRYPT mode for CKKS in a multiparty setting in the Python version of OpenFHE.

This post, Support for NOISE_FLOODING_MULTIPARTY in threshold CKKS? - Library Questions - OpenFHE, suggests using the example openfhe-development/src/pke/examples/ckks-noise-flooding.cpp at main · openfheorg/openfhe-development and scaling the noise estimate by the number of clients. When I try this, I either get wrong results or a runtime error such as
/usr/local/include/openfhe/pke/scheme/ckksrns/gen-cryptocontext-ckksrns-internal.h:l.90:genCryptoContextCKKSRNSInternal(): Precision of less than 3 bits is not supported. logstd 16.792481 + noiseEstimate 60.571271 must be 56 or less.

As an example I have the following code, where openfhe is imported as fhe. When I run it with num_clients = 3 I get wrong results, e.g.,

Results of homomorphic computations
#1 + #2 + #3 =    (-536.83891, 54.510953, -111.15191, ... ); Estimated precision: 9 bits

while if I run it with a higher number such as 7 I get the runtime error above.

def generate_cc(noise_estimate = None):
    # securityLevel = fhe.SecurityLevel.HEStd_NotSet
    securityLevel = fhe.SecurityLevel.HEStd_128_classic
    # securityLevel = fhe.SecurityLevel.HEStd_256_classic
    multDepth = 0
    ringDim = 2**13
    sigma = 3.19
    secretKeyDist = fhe.UNIFORM_TERNARY
    q_0_bits = 60 # 60 bits works, at least.
    delta_bits = 50

    # Sample Program: Step 1: Set CryptoContext
    parameters = fhe.CCParamsCKKSRNS()
    parameters.SetSecurityLevel(securityLevel)
    parameters.SetMultiplicativeDepth(multDepth)
    parameters.SetRingDim(ringDim)
    batchsize = ringDim//2 # Integer division to get an int instead of .0
    parameters.SetBatchSize(batchsize) 
    parameters.SetSecretKeyDist(secretKeyDist)
    parameters.SetStandardDeviation(sigma)
    parameters.SetKeySwitchTechnique(fhe.BV)
    parameters.SetScalingModSize(delta_bits)
    parameters.SetFirstModSize(q_0_bits)
    parameters.SetDecryptionNoiseMode(fhe.NOISE_FLOODING_DECRYPT)
    parameters.SetScalingTechnique(fhe.FIXEDAUTO)
    if noise_estimate is not None:
        print(f"Noise estimate is {noise_estimate}")
        parameters.SetExecutionMode(fhe.EXEC_EVALUATION)
        parameters.SetNoiseEstimate(noise_estimate)
    else:
        print("No noise estimate.")
        parameters.SetExecutionMode(fhe.EXEC_NOISE_ESTIMATION)

    cc = fhe.GenCryptoContext(parameters)
    # Enable features that you wish to use
    cc.Enable(fhe.PKE)
    cc.Enable(fhe.KEYSWITCH)
    cc.Enable(fhe.LEVELEDSHE)
    cc.Enable(fhe.ADVANCEDSHE)
    cc.Enable(fhe.MULTIPARTY)

    return cc

def do_computations_multiparty(num_clients, cc, sks, jpk):

    real1 = 16667.23
    real2 = -12123.213
    real3 = 1223.9991
    plaintext1 = cc.MakeCKKSPackedPlaintext([real1, real2, real3])
    plaintext2 = cc.MakeCKKSPackedPlaintext([real2, real3, real1])
    plaintext3 = cc.MakeCKKSPackedPlaintext([real3, real1, real2])

    # The encoded vectors are encrypted
    ciphertext1 = cc.Encrypt(jpk, plaintext1)
    ciphertext2 = cc.Encrypt(jpk, plaintext2)
    ciphertext3 = cc.Encrypt(jpk, plaintext3)

    # Homomorphic additions
    ciphertext_add12 = cc.EvalAdd(ciphertext1, ciphertext2)
    ciphertext_add_result = cc.EvalAdd(ciphertext_add12, ciphertext3)

    # Decrypt the result of additions
    pd0 = cc.MultipartyDecryptLead([ciphertext_add_result], sks[0])[0]
    pds = [pd0]
    for i in range(1,num_clients):
        pds.append(cc.MultipartyDecryptMain([ciphertext_add_result], sks[i])[0])
    plaintext_add_result = cc.MultipartyDecryptFusion(pds)

    print(f"real1 = {real1}")
    print(f"real2 = {real2}")
    print(f"real3 = {real3}")

    print(f"Plaintext #1: {plaintext1.GetRealPackedValue()}")
    print(f"Plaintext #2: {plaintext2.GetRealPackedValue()}")
    print(f"Plaintext #3: {plaintext3.GetRealPackedValue()}")

    # Output Results
    print("\nResults of homomorphic computations")
    
    plaintext_add_result.SetLength(3)
    print(f"#1 + #2 + #3 =    {plaintext_add_result}")

    print(f"Expected result = {[real1+real2+real3, real1+real2+real3, real1+real2+real3]}")

def do_computations_single(cc, sk, pk):
    # Sample Program: Step 3: Encryption

    real1 = 16667.23
    real2 = -12123.213
    real3 = 1223.9991
    plaintext1 = cc.MakeCKKSPackedPlaintext([real1, real2, real3])
    plaintext2 = cc.MakeCKKSPackedPlaintext([real2, real3, real1])
    plaintext3 = cc.MakeCKKSPackedPlaintext([real3, real1, real2])

    # The encoded vectors are encrypted
    ciphertext1 = cc.Encrypt(pk, plaintext1)
    ciphertext2 = cc.Encrypt(pk, plaintext2)
    ciphertext3 = cc.Encrypt(pk, plaintext3)

    # Homomorphic additions
    ciphertext_add12 = cc.EvalAdd(ciphertext1, ciphertext2)
    ciphertext_add_result = cc.EvalAdd(ciphertext_add12, ciphertext3)

    # Decrypt the result of additions
    plaintext_add_result = cc.Decrypt(ciphertext_add_result, sk)
    return plaintext_add_result.GetLogError()

def main():
    # Generate first cc and key pair for noise estimate.
    cc1 = generate_cc()
    # Generate a public/private key pair
    key_pair_single = cc1.KeyGen()
    pk = key_pair_single.publicKey
    sk = key_pair_single.secretKey

    # Noise estimate should be multiplied by the number of clients.
    noise = do_computations_single(cc1, sk, pk)
    print(f"\nNoise estimate is: {noise}")
    num_clients = 7
    noise = noise * num_clients
    print(f"Scaled noise estimate is: {noise}\n")

    # Generate new keys.
    cc2 = generate_cc(noise) 
    key_pair_multi = cc2.KeyGen()
    pks = [key_pair_multi.publicKey]
    sks = [key_pair_multi.secretKey]
    for i in range(num_clients-1):
        new_keys = cc2.MultipartyKeyGen(pks[i])
        pks.append(new_keys.publicKey)
        sks.append(new_keys.secretKey)
    jpk = pks[-1]

    print("\n")
    do_computations_multiparty(num_clients, cc2, sks, jpk)


if __name__ == "__main__":
    main()

Any suggestions as to what I am doing wrong are appreciated. Thanks in advance.

Also, in my use case it might be difficult to know exactly what data will be encrypted beforehand. Is it possible to do the noise estimation in the multiparty setting, or must it be done with a non-multiparty key pair?

One issue that I see is that in the evaluation mode you still use a NATIVE_SIZE of 64 rather than 128 (as suggested in the cited Discourse post). In this case, the 60 bits are not sufficient to store the message plus the flooding noise. The example itself discusses this issue in detail. Basically, the scaling factor for the evaluation mode will be larger than 60 bits.

Note that starting with 1.5.0, we also support a faster alternative to NATIVE_SIZE=128 by using the COMPOSINGSCALINGAUTO scaling technique. This can also be used for the evaluation mode.

You can use worst-case bounds for encrypted data instead when running the computation in the estimation mode.
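For example, a sketch of that idea (MAX_ABS_VALUE and the slot count here are assumptions about the application, not anything mandated by OpenFHE):

```python
# Hypothetical worst-case bound on any value the application will encrypt.
MAX_ABS_VALUE = 20000.0
NUM_INPUTS = 3  # the script above adds three ciphertexts

# Saturate the slots used by the computation with the worst-case magnitude.
worst_case = [MAX_ABS_VALUE, -MAX_ABS_VALUE, MAX_ABS_VALUE]

# The additions can produce at most this magnitude, so it must also fit.
worst_case_sum = NUM_INPUTS * MAX_ABS_VALUE
print(worst_case_sum)  # prints: 60000.0
```

Feeding worst_case (instead of real data) to MakeCKKSPackedPlaintext in the EXEC_NOISE_ESTIMATION run then yields a GetLogError() that covers any real inputs within the bound.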

You should be able to run it in either mode. To avoid multiplying the noise estimate by extra factors, you can run both the estimation and evaluation phases in the multiparty setting, i.e., using the multiparty OpenFHE API, as in do_computations_multiparty.
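A sketch of that idea, reusing only the multiparty calls already present in the script above (it assumes the fused plaintext exposes GetLogError() the same way the single-party one does):

```python
# Sketch: run the noise-estimation pass through the multiparty API itself,
# so the estimate already reflects the threshold decryption and no manual
# scaling by the number of clients is needed. `cc` must have been generated
# with EXEC_NOISE_ESTIMATION (i.e., generate_cc() with no noise estimate),
# and sks/jpk must come from the same MultipartyKeyGen chain as in main().
def estimate_noise_multiparty(cc, sks, jpk, values):
    """Return the log2 noise estimate from a multiparty decryption."""
    pt = cc.MakeCKKSPackedPlaintext(values)
    ct = cc.Encrypt(jpk, pt)

    # Mirror the evaluation-phase workload (here: two homomorphic additions,
    # like adding three ciphertexts in do_computations_multiparty).
    result = cc.EvalAdd(ct, ct)
    result = cc.EvalAdd(result, ct)

    # Threshold decryption: one lead share, the rest are main shares.
    shares = [cc.MultipartyDecryptLead([result], sks[0])[0]]
    for sk in sks[1:]:
        shares.append(cc.MultipartyDecryptMain([result], sk)[0])
    fused = cc.MultipartyDecryptFusion(shares)
    return fused.GetLogError()
```

The returned value can then be passed straight to generate_cc() for the EXEC_EVALUATION context, without the num_clients multiplication.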

Thank you very much for the reply.

I have now tried to use the COMPOSINGSCALINGAUTO scaling technique, but only get an attribute error AttributeError: module 'openfhe' has no attribute 'COMPOSINGSCALINGAUTO'. Did you mean: 'COMPOSITESCALINGAUTO'?. Using COMPOSITESCALINGAUTO does remove the error regarding the size of the noise estimate, since it is now much smaller. I still get incorrect results in the evaluation step though, e.g.,

Results of homomorphic computations
#1 + #2 + #3 =    (202.37097, -8.619025, -99.097631, ... ); Estimated precision: 31 bits

Expected result = [5768.0161, 5768.0161, 5768.0161]

I assume there are other parameters which I am selecting wrongly? I have tried using both the single-party and multiparty settings for the estimation part. The only difference is that the multiparty estimate reports slightly higher estimated precision on the wrong results, e.g.

Results of homomorphic computations
#1 + #2 + #3 =    (202.37097, -8.619025, -99.097631, ... ); Estimated precision: 33 bits

Expected result = [5768.0161, 5768.0161, 5768.0161]

I also tried NATIVE_SIZE=128, just in case, by running cmake .. -DNATIVE_SIZE=128 when building the C++ version. In that case I also get wrong results, e.g.

Results of homomorphic computations
#1 + #2 + #3 =    (202.37097, -8.619025, -99.097631, ... ); Estimated precision: 33 bits

Expected result = [5768.0161, 5768.0161, 5768.0161]

Am I setting NATIVE_SIZE to 128 correctly, or is there another method for doing it? I assume my other parameter choices are the problem instead?

Lastly, is there any advantage to using NATIVE_SIZE=128 for the evaluation part instead of the COMPOSITESCALINGAUTO scaling mode?

Yes. There was a typo.

From what I can see, you are setting the multiplicative depth to 0. Yet, you add very large numbers (much higher than 1.0 in magnitude). I do not know what the first modulus and scaling mod size are set to in the EVALUATION mode. The difference between them should be large enough to store the inputs and results without an overflow. The difference is probably too small in your case, and this causes an overflow. If this is the cause, you can fix the code by setting the multiplicative depth to 1 (instead of 0).
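A rough back-of-the-envelope check of the headroom (a sketch: it assumes the evaluation-mode moduli actually match the requested 60/50-bit sizes, which the library may adjust):

```python
import math

q0_bits = 60     # SetFirstModSize in the script above
delta_bits = 50  # SetScalingModSize in the script above

# With depth 0, the integer part of a decrypted message must fit in the
# difference between the first modulus and the scaling modulus.
headroom_bits = q0_bits - delta_bits           # ~10 bits -> values up to ~1024
needed_bits = math.ceil(math.log2(16667.23))   # bits for the largest input

print(headroom_bits, needed_bits)  # prints: 10 15 -> overflow
```

With multiplicative depth 1 there is an extra ~50-bit scaling modulus in the chain, which is presumably why depth 1 gives these inputs enough room.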

COMPOSITESCALINGAUTO is faster than NATIVE_SIZE=128, so the former should be used in practice. In terms of functionality, they should be the same.

Thank you for all the help.

It was indeed an issue of overflow. Setting multiplicative depth to 1 fixed it. Thanks again for all the help.