EvalAdd does not add in some cases

Hello everyone,

we have been using OpenFHE and CKKS for quite a while now, but lately we discovered some strange behaviour around the EvalAdd(InPlace) method.
In some cases it does not add the numbers at all, and everything stays as it is (value, level, precision, …). This happens for ciphertext-plaintext addition as well as for ciphertext-ciphertext addition.
However, if we split the value (let's say into 20 pieces) and add the pieces one after another, it works.
We tried several environments and different crypto parameters, but it happened every time. The value at which this behaviour occurs changes, though, and is not always the same.
Even multiplying the values by (2*0.5) beforehand does not change anything; the addition still does nothing.

We provide an MWE here:

#include "openfhe.h"

using namespace lbcrypto;
using namespace std;

int main() {
    uint32_t multDepth    = 50;
    uint32_t scaleModSize = 59;

    CCParams<CryptoContextCKKSRNS> parameters;
    parameters.SetMultiplicativeDepth(multDepth);
    parameters.SetScalingModSize(scaleModSize);

    CryptoContext<DCRTPoly> cc = GenCryptoContext(parameters);
    cc->Enable(PKE);
    cc->Enable(LEVELEDSHE);

    auto keys = cc->KeyGen();
    // Encrypt a single 1, then repeatedly add the plain scalar i to it
    auto val = cc->Encrypt(keys.publicKey, cc->MakeCKKSPackedPlaintext(vector<double>{1}));
    for (int i = 0; i < 41; i++) {
        auto added = cc->EvalAdd(val, i);
        Plaintext p;
        cc->Decrypt(keys.secretKey, added, &p);
        p->SetLength(1);
        cout << "Added " << i << ":\t" << p << endl;
    }
    return 0;
}

Depending on the environment, it fails for i = 5 or for some larger value.

Is this a bug?


Hey there!

Thanks for providing the MWE; that sped things up on our end. I can confirm that I encounter the same bug. That said, for a depth of 30 the bug is not present, so you might want to drop the depth and just bootstrap.
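
For reference, here is a rough sketch of what "drop the depth and bootstrap" could look like. It loosely follows OpenFHE's simple-ckks-bootstrapping example; the level budget, slot count, and depth below are illustrative assumptions, not recommendations:

#include "openfhe.h"

using namespace lbcrypto;

int main() {
    CCParams<CryptoContextCKKSRNS> parameters;
    parameters.SetSecretKeyDist(UNIFORM_TERNARY);
    parameters.SetScalingModSize(59);
    // Assumption: the depth must cover the bootstrapping circuit plus the
    // levels needed for your own computation (FHECKKSRNS::GetBootstrapDepth
    // can compute the bootstrapping part; its signature varies by version).
    parameters.SetMultiplicativeDepth(25);

    CryptoContext<DCRTPoly> cc = GenCryptoContext(parameters);
    cc->Enable(PKE);
    cc->Enable(KEYSWITCH);
    cc->Enable(LEVELEDSHE);
    cc->Enable(ADVANCEDSHE);
    cc->Enable(FHE);

    std::vector<uint32_t> levelBudget = {4, 4};  // illustrative
    uint32_t numSlots = 8;                       // illustrative
    cc->EvalBootstrapSetup(levelBudget, {0, 0}, numSlots);

    auto keys = cc->KeyGen();
    cc->EvalMultKeyGen(keys.secretKey);
    cc->EvalBootstrapKeyGen(keys.secretKey, numSlots);

    auto ct = cc->Encrypt(keys.publicKey,
                          cc->MakeCKKSPackedPlaintext(std::vector<double>{1}));
    // ... compute until levels run low, then refresh the ciphertext:
    ct = cc->EvalBootstrap(ct);
    return 0;
}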

I opened an issue for this, "When mult depth is high, repeated additions produces incorrect result", which you can follow to track it.

Hopefully this unblocks you and gives you a path forward (dropping the depth + bootstrapping).


Hi @lujoho,

First, I would like to remark that using a depth of 50 is typically not a good idea for performance reasons. In many practical scenarios, 35-40 is the most that makes sense.

However, this issue occurs for a smaller scaling factor, too. For instance, I get it for

uint32_t multDepth = 5;
uint32_t scaleModSize = 50; 

This bug is specific to the FLEXIBLEAUTOEXT scaling technique, which is used by default. If you change it to anything else, e.g., FIXEDMANUAL, FIXEDAUTO, or FLEXIBLEAUTO, the addition will work. In other words, you can add the following line to your code to fix the MWE:

parameters.SetScalingTechnique(FLEXIBLEAUTO);

I will now update the issue.


Hi @iquah, @ypolyakov,

thank you very much for your quick replies and kind tips!
I will reduce my depth and use more bootstrapping in the future.

Changing the scaling technique made my workaround obsolete, and everything works fine now.

Hey everyone,

sorry to get back to you again, but unfortunately I ran into the same problem once more. Changing the scaling technique worked fine; however, it only works for scaleModSize < 59. If scaleModSize is exactly 59, it still fails for i > 31 (verified for FIXEDAUTO and FLEXIBLEAUTO).
I did not check all sizes and all values of i, but all the other scaleModSizes seem to work.

So for my code, changing to scaleModSize = 58 works fine, but I wanted to mention the issue nonetheless.

Hey again,

in my use case I have to add a value of roughly 400, and I noticed that the problem seems to occur for every scaleModSize.
For example, for scaleModSize = 58 it fails for values > 63, and for scaleModSize = 55 a value > 512 cannot be added.
I looked a bit further into it: for scaleModSize = 50, for instance, it fails for values roughly > 16 000 (although such big values should probably not be added anyway).

So it seems to me that such a "threshold" value can be found for every scaleModSize.

Hi @lujoho,

Please keep in mind that scaleModSize and firstModSize are two related parameters. firstModSize is used for decryption. If the multiplicative depth is set exactly to the depth of the circuit, then at the time of decryption you will be working with a single modulus of firstModSize bits. Therefore, your message cannot be larger (in bits) than roughly firstModSize - scaleModSize. When you increase scaleModSize (closer to firstModSize), you decrease the room for the message. Most likely, you are running into this issue (it is a question of using CKKS correctly rather than a bug in OpenFHE).

There is an easy way to confirm this: just increase the multiplicative depth by 1 and see if the issue goes away. If you do this, one extra scaleModSize-bit modulus remains at decryption time, so the message "budget" in bits will increase to firstModSize, i.e., (scaleModSize + firstModSize) - scaleModSize.

Everything I described here is expected by CKKS design.

Also note that firstModSize = 60 by default.
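
To put concrete numbers on this (my arithmetic, just spelling out the estimate above): with the defaults firstModSize = 60 and scaleModSize = 59, the headroom at the last level is roughly 60 - 59 = 1 bit, so the message can barely exceed 2; with scaleModSize = 50 the budget is roughly 60 - 50 = 10 bits, i.e., messages up to about 1024. Each additional spare level adds another scaleModSize bits of budget.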

Hi @ypolyakov,

thanks for your answer and explanation.
I am not quite sure I understood you correctly. I increased the depth by one (and by other values), but the issue does not go away.
To me it seems to be independent of the multiplicative depth.
Since I have a rough understanding of CKKS, I had already selected a depth that is slightly higher than the circuit depth.

However, the issue can still be reproduced with the MWE from my first post, because for depth = 20 and scale = 58, adding 1 + 64 results in 1.

Sorry if I am too dumb and do not see the obvious problem here…

Hi @lujoho,

It looks like you ran into another bug. The high-level story is that when you set the scale mod size to 58, you end up adding 2^{58} * value to the ciphertext. As soon as the value hits 64, you get 2^{58} * 2^{6} = 2^{64}, which overflows for 64-bit numbers. I've added an issue for this: Adding a large scalar does not work in CKKS · Issue #393 · openfheorg/openfhe-development · GitHub
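
A plain C++ snippet (not OpenFHE; just my illustration of the arithmetic above) shows the wrap-around:

#include <cstdint>
#include <iostream>

int main() {
    uint64_t scalingFactor = 1ULL << 58;  // 2^58, as with scaleModSize = 58
    uint64_t value         = 64;          // 2^6
    // 2^58 * 2^6 = 2^64, which wraps to 0 in 64-bit unsigned arithmetic,
    // so effectively nothing is added to the ciphertext
    std::cout << scalingFactor * value << std::endl;  // prints 0
    return 0;
}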

Please note that this scenario is not very practical for CKKS (this is why you ran into two bugs). One only needs a large scaling factor in scenarios with CKKS bootstrapping, and when using CKKS bootstrapping, the message should be normalized to something like [-1,1]; otherwise, the modular reduction approximation in CKKS bootstrapping becomes inaccurate. See Questions on CKKS bootstrapping with some computations after bootstrapping - #6 by wupengfei for a more detailed discussion of this.
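
As a sketch of that normalization (my illustration; the bound B, the ciphertext ct, and the use of EvalMult with a double constant are assumptions about your setup):

// Assumption: ct holds the intermediate result and its values are bounded
// by some known B (here B = 512, purely illustrative)
double B = 512.0;
auto normalized = cc->EvalMult(ct, 1.0 / B);   // scale into [-1, 1]; costs one level
auto refreshed  = cc->EvalBootstrap(normalized);
auto restored   = cc->EvalMult(refreshed, B);  // undo the scaling; costs one level
                                               // and, as noted, some precision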

When one does not use CKKS bootstrapping, a much smaller scaling factor will be sufficient. So this issue is unlikely to come up.

With all that said, both are real bugs, and we will address them in the next bugfix version (v1.0.4). Thank you for reporting these bugs.

Hi @ypolyakov,

thanks for your confirmation of the bug.
We highly appreciate your explanation and practical tips!
It is just an intermediate result that is higher than 1; the very next step is to scale it down again, so whenever bootstrapping is applied we are always in [-1,1].
Of course, this operation is very costly in terms of precision, but without such large additions our algorithm unfortunately does not work.

v1.0.4 (with the bugfix) is now released.