Source of noise and noise growth in RNS-CKKS

Hi Alberto, a few comments:

  • First, what precision are you plotting? Your own computation based on the plaintext result and the decrypted result, or the value OpenFHE returns when decrypting a CKKS ciphertext? For the latter, this is roughly its meaning: Meaning of the bit precision.
  • Some sanity checks on the application you are running: if you run the first 400 iterations in the clear, say, and then run another 800 iterations on ciphertexts, do you see the same drop in precision after 400 more iterations as in your original graph? Or is the trend different?
  • A comment about loss of precision in CKKS bootstrapping: CKKS Bootstrapping - #7 by ypolyakov.
  • If your values' magnitude changes drastically with the iterations, then a change in precision is expected. Recall that for CKKS bootstrapping to work well, it is recommended that your inputs lie in [-1, 1], and ideally are even smaller. To this end, yes, it would be good if you could scale the input down before bootstrapping and then back up afterwards. This is done to some extent automatically in CKKS bootstrapping, but you can also add a separate scaling.
  • To get better precision in CKKS bootstrapping, you can use a better modular reduction approximation: https://eprint.iacr.org/2020/552.pdf, https://eprint.iacr.org/2020/1203.pdf. Currently, the modular reduction is approximated by 1/(2\pi) \sin(2\pi x). As suggested above, you can improve this by homomorphically composing with the arcsine, or by using a different modular approximation. However, to do this, you will have to modify the CKKS bootstrapping procedure internally.
  • The use of the imaginary part can be re-added to OpenFHE: Can I use imaginary part of plaintext in CKKS? - #6 by andreea.alexandru. However, if you do so, you won't have access to the precision output when decrypting.
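
Regarding the first bullet: if you want to compute the precision yourself from the plaintext and decrypted results, a minimal sketch could look like the following. This is plain Python, independent of OpenFHE; `bit_precision` is a hypothetical helper measuring -log2 of the largest coordinate-wise error, not necessarily the exact formula OpenFHE uses internally.

```python
import math

def bit_precision(expected, actual):
    """Approximate bit precision of `actual` against `expected`:
    -log2 of the largest coordinate-wise absolute error.
    (Hypothetical helper, not OpenFHE's internal computation.)"""
    err = max(abs(e - a) for e, a in zip(expected, actual))
    if err == 0:
        return float("inf")
    return -math.log2(err)

# Example: the decrypted slots differ from the plaintext result by ~2^-40
expected = [0.25, -0.5, 0.125]
actual   = [0.25 + 2**-40, -0.5 - 2**-41, 0.125]
print(bit_precision(expected, actual))  # 40.0
```

Comparing this self-computed value against what OpenFHE reports on decryption is a quick way to tell whether the two notions of precision agree in your plots.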
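
To see numerically why the sine-based modular reduction loses precision and how composing with the arcsine helps, here is a toy illustration on plain Python floats (not homomorphic; the function names are made up for the sketch):

```python
import math

# CKKS bootstrapping approximates modular reduction (to the nearest
# integer) by (1/(2*pi)) * sin(2*pi*x), which is accurate only when the
# fractional part of x is small, i.e. the message is small relative to
# the ciphertext modulus.

def mod_reduce_exact(x):
    return x - round(x)

def mod_reduce_sine(x):
    return math.sin(2 * math.pi * x) / (2 * math.pi)

def mod_reduce_sine_arcsine(x):
    # Composing with arcsine undoes the sine distortion:
    # (1/(2*pi)) * asin(sin(2*pi*x)) equals x - round(x)
    # whenever |x - round(x)| <= 1/4.
    return math.asin(math.sin(2 * math.pi * x)) / (2 * math.pi)

x = 3.01  # the integer part 3 should be removed by modular reduction
print(abs(mod_reduce_sine(x) - mod_reduce_exact(x)))         # roughly 6.6e-6
print(abs(mod_reduce_sine_arcsine(x) - mod_reduce_exact(x))) # near machine precision
```

The first error term grows cubically with the fractional part, which is one reason keeping inputs small (or scaling them down before bootstrapping) pays off in precision; in the homomorphic setting, the arcsine composition itself must of course be evaluated as a polynomial approximation inside the modified bootstrapping procedure.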