Source of noise and noise growth in RNS-CKKS

Hello everyone,

I am currently working with OpenFHE’s implementation of CKKS with bootstrapping. My use case involves running hundreds, if not thousands, of iterations and bootstrapping operations. As a result, I’m encountering significant challenges with precision, likely due to noise blowup.

The plot below shows the estimated precision of the bootstrapped ciphertext over multiple iterations. The precision remains relatively stable initially but begins to degrade dramatically after a certain iteration.

To understand why this is happening, I am reading your very insightful paper on RE-CKKS-DE. I have a few questions to check whether my understanding is correct.

Disclaimer: I know that there are many sources of approximations in CKKS. Given that the algorithm I am using (Gradient Descent) is numerically stable, I am not interested in approximation errors caused by encoding or the approximate nature of CKKS. Instead, I am concerned only about the noise growth.

  • In Table 1, do the last two columns correspond, respectively, to OpenFHE’s FIXEDAUTO and FLEXIBLEAUTOEXT?
  • It seems to me that in RNS-CKKS the only sources of noise are the encryption phase and the key-switching operation. With FLEXIBLEAUTOEXT, both become negligible. However, does this mean that they are not entirely absent and that, over thousands of iterations, this noise can therefore accumulate and impact precision?
  • Given that rotations are automorphisms and that key switching noise is negligible in FLEXIBLEAUTOEXT, is it correct to say that rotations do not add any additional noise to the ciphertext?
  • Are cross-level additions and multiplications more expensive (again, only in terms of noise growth) than regular additions and multiplications? Since cross-level additions require an adjustment operation (involving multiplication by a constant), does this increase the noise more than a regular addition would?

Thanks in advance for your support and sorry for so many questions. If you can answer even just one, feel free to do it.

Have you considered examining the statistical distribution of the values between iterations? My number one guess would be that their variance grows, which both compounds the noise growth after each basic operation and decreases the minimum precision of the bootstrapping, due to how the EvalMod step behaves the further the values are from the origin (in that case, either compressing the values before bootstrapping, using multi-iteration bootstrapping, or composing with arcsine will help).

Now that I think about it, it might also be the imaginary part that blows up.

Hi Jean,
Thank you for your insights. I’m currently monitoring various metrics, including the min, max, mean, and std of the values. Typically, both the mean and standard deviation increase with each iteration. However, I’ve found a working method to force them to decrease without altering the computations. In both cases, the negative trend in precision remains consistent; when the mean and std are decreasing, it just takes slightly longer for the precision to reach zero.

composing with arcsine might help

Interesting. I had never heard of this. Could you provide some references?

it might also be the imaginary part that blows up

Even more interesting. I will monitor the imaginary component tomorrow and share my findings. That said, do I have any control over the imaginary part of the values? As far as I know, OpenFHE does not support complex numbers in its implementation.

composing with arcsine might help

The EvalMod step of the bootstrapping is usually evaluated by approximating x \mod 1 \approx \frac{1}{2\pi}\sin(2\pi x), which is only tight if x is close to the origin (typically in the range [-1/256, 1/256]). For larger values, the approximation quickly loses precision, but this can be compensated for by instead evaluating \frac{1}{2\pi}\arcsin(\sin(2\pi x)), which is exactly equal to x \mod 1 for all x \in k + [-0.25, 0.25] with k \in \mathbb{Z}.
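To see how quickly the plain sine proxy degrades away from the origin, here is a small check in the clear (plain C++, no OpenFHE involved; the sample values 1/256, 0.05, and 0.1 are just illustrative):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double pi = std::acos(-1.0);
    // Error of the EvalMod proxy (1/(2*pi)) * sin(2*pi*x) against the exact value x
    // (for |x| < 0.25, x mod 1 is simply x).
    for (double x : {1.0 / 256, 0.05, 0.1}) {
        double approx = std::sin(2 * pi * x) / (2 * pi);
        std::printf("x = %.6f  approx = %.9f  |error| = %.2e\n", x, approx, std::fabs(approx - x));
    }
    return 0;
}
```

The error grows roughly cubically with the distance from the origin, so a value around 0.05 already loses more than three decimal digits of accuracy compared to one inside [-1/256, 1/256].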

it might also be the imaginary part that blows up

As far as I know, OpenFHE disabled the ways to interact with the imaginary part because it is used to estimate the noise of the real part during decryption.

But it can be compensated by instead evaluating \frac{1}{2\pi}\arcsin(\sin(2\pi x))

Then the next question is how to do that in OpenFHE. Anyway, it might be a problem of range, since \frac{1}{256} \approx 0.0039 and the mean value of my ciphertexts is about 0.05359. I wonder if I can just multiply the ciphertexts by something like 0.001 before bootstrapping, bootstrap them, and then rescale back with another multiplication. I will test it and post the results.
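For reference, the scale-down / bootstrap / scale-up idea would look roughly like this (a minimal sketch, assuming cc is my CryptoContext<DCRTPoly> with bootstrapping already set up, ct is the ciphertext to refresh, and 0.001 is just the example factor from above):

```cpp
// Sketch: compress the values before bootstrapping, then undo the compression.
auto compressed = cc->EvalMult(ct, 0.001);           // scale down so values are near the origin
auto refreshed  = cc->EvalBootstrap(compressed);     // refresh the compressed ciphertext
auto rescaled   = cc->EvalMult(refreshed, 1000.0);   // scale back up (costs one extra level)
```

Note that each EvalMult by a constant consumes one level, so the post-bootstrap rescaling reduces the usable depth of the refreshed ciphertext by one.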

As far as I know, OpenFHE disabled the ways to interact with the imaginary part because it is used to estimate the noise of the real part during decryption.

I managed to read the imaginary part using GetCKKSPackedValue(), which returns complex numbers. However, the imaginary part of every value stays constant and equal to zero across all iterations. It might be because the interactions with the imaginary part are indeed disabled.
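For completeness, this is roughly how I am checking it (a sketch, assuming cc, keys, and ct are my existing crypto context, key pair, and ciphertext):

```cpp
// Decrypt and inspect the imaginary parts of the packed slots.
Plaintext result;
cc->Decrypt(keys.secretKey, ct, &result);
double maxImag = 0.0;
for (const auto& slot : result->GetCKKSPackedValue()) {  // vector of std::complex<double>
    maxImag = std::max(maxImag, std::fabs(slot.imag()));
}
std::cout << "largest |imaginary part| = " << maxImag << std::endl;
```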

Hi Alberto, a few comments:

  • First, what is the precision you are plotting? Your own computation based on the plaintext result and the decrypted result, or what OpenFHE is returning when decrypting a CKKS ciphertext? For the latter, this is roughly its meaning: Meaning of the bit precision.
  • Some sanity checks on the application you are running. If you do in the clear the first 400 iterations, say, and then run another 800 iterations on ciphertexts, do you see the same drop in precision after 400 more iterations as in your original graph? Or is the trend different?
  • A comment about loss of precision in CKKS bootstrapping: CKKS Bootstrapping - #7 by ypolyakov.
  • If your values’ magnitudes change drastically with the iterations, then a change in precision is expected. Recall that for CKKS bootstrapping to work well, it is recommended that your inputs lie within [-1, 1], and ideally are even smaller. To this end, yes, it would be good if you can scale the input down before bootstrapping and then back up. This is done to some extent automatically in CKKS bootstrapping, but you can also add a separate scaling.
  • To get a better precision in the CKKS bootstrapping, you can use a better modular reduction approximation https://eprint.iacr.org/2020/552.pdf, https://eprint.iacr.org/2020/1203.pdf. Currently, the modular reduction is approximated by 1/(2\pi) \sin(2\pi x). As suggested above, you can improve this by homomorphically composing with the arcsine, or by using a different modular approximation. However, to do this, you will have to internally modify the CKKS bootstrapping procedure.
  • The use of the imaginary part can be re-added to OpenFHE: Can I use imaginary part of plaintext in CKKS? - #6 by andreea.alexandru. But if you do so, you won’t have access to the precision output when decrypting.

Hi Andreea, thank you very much for all the inputs.

First, what is the precision you are plotting?

I am plotting the result of the GetLogPrecision() method in OpenFHE. From the post you linked, it is not clear to me whether this function also accounts for the noise introduced by the scheme. I have also tried plotting the difference between my computations on plaintexts and on ciphertexts. The trend is the same: a logarithmic increase initially and then an exponential blow-up around step 400.
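The manual precision I compute is essentially the following (a sketch of my own helper, not an OpenFHE function; expected holds my plaintext-side results and decrypted the slots returned by GetCKKSPackedValue() after decryption):

```cpp
#include <algorithm>
#include <cmath>
#include <complex>
#include <vector>

// Precision in bits derived from the largest slot-wise error between the
// plaintext-side computation and the decrypted CKKS slots.
double LogPrecisionBits(const std::vector<double>& expected,
                        const std::vector<std::complex<double>>& decrypted) {
    double maxErr = 0.0;
    for (size_t i = 0; i < expected.size(); ++i) {
        maxErr = std::max(maxErr, std::fabs(decrypted[i].real() - expected[i]));
    }
    return -std::log2(maxErr);
}
```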

If you do in the clear the first 400 iterations, say, and then run another 800 iterations on ciphertexts…

Good idea. I will let you know.

Recall that for CKKS bootstrapping to work well, it is recommended that your inputs are between [-1, 1]

I am certain that all the bootstrapped values are well within that range and that all the values are quite small (the mean absolute value is about 0.05359). In fact, I tried rescaling by 0.001 before bootstrapping and then by 1000 after bootstrapping. That did not help; it actually consumed the precision faster, likely due to the increased multiplicative depth. Note that I am already using iterative bootstrapping and FLEXIBLEAUTOEXT with the default moduli.
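For context, my bootstrapping call follows the standard OpenFHE two-iteration pattern (a sketch; the precision argument is an example value, it should be the precision measured for a single bootstrapping with one’s own parameters):

```cpp
// Iterative (two-pass) bootstrapping: the third argument is the precision (in bits)
// achieved by a single bootstrapping with these parameters, measured beforehand.
uint32_t numIterations = 2;
uint32_t precision     = 17;   // example value only
auto refreshed = cc->EvalBootstrap(ct, numIterations, precision);
```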

I believe this is an issue of noise growth. Let me describe my use case in the simplest way possible. Say that I have three ciphertexts x_t, a_t and b_t which undergo the following operations (an OpenFHE sketch of one iteration is given after the list):

  • y_t \leftarrow x_t \cdot a_t
  • z_t \leftarrow y_t \cdot b_t
  • b_{t+1} \leftarrow \text{Bootstrap}(b_t + y_t)
  • a_{t+1} \leftarrow \text{Bootstrap}(a_t + x_t \cdot z_t)
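In code, one iteration looks roughly like this (a minimal sketch; the variable names mirror the notation above, and cc is the crypto context):

```cpp
// One iteration of the update rules above (sketch).
auto y = cc->EvalMult(x, a);                                 // y_t = x_t * a_t
auto z = cc->EvalMult(y, b);                                 // z_t = y_t * b_t
b = cc->EvalBootstrap(cc->EvalAdd(b, y));                    // b_{t+1} = Bootstrap(b_t + y_t)
a = cc->EvalBootstrap(cc->EvalAdd(a, cc->EvalMult(x, z)));   // a_{t+1} = Bootstrap(a_t + x_t * z_t)
```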

Noise accumulates at each iteration in a_t and b_t. This noise is never removed, as CKKS bootstrapping cannot do that. Specifically, the noise in a_t is transferred to y_t and then interacts multiplicatively with the noise in b_t. The same happens for z_t. I believe that in this setting the noise growth is no longer negligible, due to the multiplicative interaction between noisy ciphertexts. What do you think?

Thank you for your time and help.

If you do in the clear the first 400 iterations, say, and then run another 800 iterations on ciphertexts…

Done. I had to change some parameters of the experiments and therefore you will see slightly different plots than the one at the beginning of the post, but the gist is the same.

This is the precision plot with all the steps performed in ciphertext.

This is when the first 120 steps are done in plaintext in standard float32 and the remaining ones in ciphertext (note that the plot starts with step 120).

They are basically the same. The first one performed 494 steps, and the second one 584 (there was enough precision for a few more, but the PC crashed).

But how are x_{t+1} and y_{t+1} updated? It seems they are not refreshed, and indeed noise could be accumulating there.

I forgot to mention that they are not updated. You can assume x_t is a freshly encrypted, randomly generated ciphertext that changes at each iteration. Then, y_t is overwritten each time with the result of x_t \cdot a_t.

Hi everyone,

I finally figured out what was causing the issue: it had to do with the packing technique I am using. In short, the packing involves duplicating some values from the original vector, e.g., pt = [x_1, x_2, x_3], into several ciphertexts like ct_1 = [x_1, x_1, x_1], ct_2 = [x_2, x_2, x_2], \dots, and so on.

For the algorithm to work properly, these values need to stay equal. But due to the noise and approximation errors in CKKS, they started to diverge, so I ended up with something of the form ct_1 = [x_1 + e_1, x_1 + e_2, x_1 + e_3] where e_1, e_2, e_3 are small, different errors. This difference is negligible in the first iterations, but after several iterations, it becomes significant and affects the correctness of the results (which explains the linear drop I was seeing in my precision plot).

I fixed it by using binary masks and rotations to keep these values in sync, resetting them based on the first one. Everything works fine now! So the issue was indeed noise growth, but not just because of the interaction between bootstrapped ciphertexts.
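For anyone hitting the same issue, the resynchronization I use is essentially a masked broadcast of the first slot (a sketch, assuming the ciphertext is fully packed with copies of the same value over numSlots slots, numSlots is a power of two, and rotation keys for the negative power-of-two indices below have been generated):

```cpp
// Re-synchronize a ciphertext whose slots should all hold the same value,
// by broadcasting slot 0 to the remaining slots.
std::vector<double> maskVec(numSlots, 0.0);
maskVec[0] = 1.0;                                        // binary mask selecting slot 0
auto mask   = cc->MakeCKKSPackedPlaintext(maskVec);
auto synced = cc->EvalMult(ct, mask);                    // keep only slot 0
for (uint32_t shift = 1; shift < numSlots; shift *= 2) {
    // Copy the already-filled prefix one step further to the right each time.
    synced = cc->EvalAdd(synced, cc->EvalRotate(synced, -static_cast<int32_t>(shift)));
}
// `synced` now holds slot 0's value in every slot (up to the usual CKKS error).
```

The mask multiplication consumes one level, and the loop needs only log2(numSlots) rotations.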

Thanks for all your help!
Alberto