[CKKS] Relations between bootstrapping, noise and precision

Hi there, I have a few questions regarding the bootstrapping process in the CKKS scheme. From my understanding of the bootstrapping operation in schemes such as TFHE, its goal is to refresh the ciphertext by homomorphically decrypting it, resulting in a new ciphertext with a smaller noise level than the original one.

However, I am uncertain if CKKS bootstrapping operates in the same manner. Specifically, does CKKS bootstrapping reduce the noise level too, or is it intended just to restore the number of available multiplications? In my use case (I will keep the description high-level for broader applicability; if you need more details, just let me know), I perform several operations (additions, multiplications, and rotations) on a ciphertext, followed by bootstrapping. This process is then repeated in a loop on the same ciphertext.

I was expecting to be able to repeat this indefinitely. However, I see that the precision drops significantly after the first bootstrapping (as expected), but it also continues to degrade with each iteration. Eventually, the precision becomes too low, leading to an exception. This suggests that CKKS bootstrapping may not reduce the noise, unlike in TFHE, and consequently that it is not possible to compute circuits of arbitrary depth with this scheme.

Additionally, I am trying to understand the relationship between precision and noise in CKKS within OpenFHE. I am unclear about what the value returned by the GetLogPrecision() method specifically indicates. If after a few iterations the precision decreases, is that an indicator that more noise has been introduced, or did it decrease simply due to the approximate nature of the scheme?

Thank you for your assistance!


CKKS bootstrapping does not completely remove the noise. The noise becomes part of the message during encryption! Think of it as one big number in which the message lives in the MSBs and the error lives in the LSBs, and the two cannot be separated.
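To make that concrete (the scale below is just an illustrative value): with a scaling factor of, say, \Delta = 2^{40}, a plaintext value m is encoded roughly as the integer \lfloor \Delta m \rceil, so the top bits of that integer carry the message while the fresh encryption noise e occupies the bottom few bits of the same integer. Every homomorphic operation grows e, moving the message/noise boundary toward the MSBs.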

You can use iterative bootstrapping to increase the bootstrapping precision, or build OpenFHE with NATIVE_SIZE=128 to run CKKS with 128-bit native words.

There are OpenFHE examples that show how to do that:

  1. Iterative bootstrapping (see the sketch after this list)
  2. openfhe-development/src/pke/examples/advanced-real-numbers-128.cpp at main · openfheorg/openfhe-development · GitHub
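
Here is a minimal sketch of the two-pass call, modeled on the iterative-ckks-bootstrapping.cpp example. The parameters are illustrative only (HEStd_NotSet disables the security check, so this is a toy setup), and the precision value passed to the second call is a placeholder for a value you would measure experimentally with a single-pass bootstrap first:

```cpp
#include "openfhe.h"

using namespace lbcrypto;

int main() {
    // Toy CKKS context with bootstrapping enabled. NOT secure parameters;
    // for experimentation only.
    CCParams<CryptoContextCKKSRNS> parameters;
    parameters.SetSecretKeyDist(UNIFORM_TERNARY);
    parameters.SetSecurityLevel(HEStd_NotSet);
    parameters.SetRingDim(1 << 12);
    parameters.SetScalingModSize(59);
    parameters.SetFirstModSize(60);
    parameters.SetScalingTechnique(FLEXIBLEAUTOEXT);

    // Depth = levels left for your circuit after bootstrapping, plus the
    // levels consumed by the bootstrapping circuit itself.
    std::vector<uint32_t> levelBudget = {3, 3};
    uint32_t levelsAfterBootstrap     = 10;
    parameters.SetMultiplicativeDepth(
        levelsAfterBootstrap + FHECKKSRNS::GetBootstrapDepth(levelBudget, UNIFORM_TERNARY));

    CryptoContext<DCRTPoly> cc = GenCryptoContext(parameters);
    cc->Enable(PKE);
    cc->Enable(KEYSWITCH);
    cc->Enable(LEVELEDSHE);
    cc->Enable(ADVANCEDSHE);
    cc->Enable(FHE);

    uint32_t numSlots = 8;
    cc->EvalBootstrapSetup(levelBudget, {0, 0}, numSlots);

    auto keyPair = cc->KeyGen();
    cc->EvalMultKeyGen(keyPair.secretKey);
    cc->EvalBootstrapKeyGen(keyPair.secretKey, numSlots);

    std::vector<double> x = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8};
    Plaintext ptxt = cc->MakeCKKSPackedPlaintext(x, 1, 0, nullptr, numSlots);
    auto ct        = cc->Encrypt(keyPair.publicKey, ptxt);

    // Single-pass bootstrapping: restores levels at some base precision p.
    auto ctSingle = cc->EvalBootstrap(ct);

    // Iterative (two-pass) bootstrapping: seeded with the precision p
    // measured for the single pass; 17 below is only a placeholder for
    // your own measurement.
    uint32_t precisionSingleBoot = 17;
    auto ctDouble = cc->EvalBootstrap(ct, 2, precisionSingleBoot);

    return 0;
}
```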

GetLogPrecision() returns the remaining number of MSBs used for the message. Note that the error grows to the left (toward the MSBs) as we compute.
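Concretely, GetLogPrecision() is a method of the decrypted Plaintext, not of the ciphertext. A sketch, assuming cc, keyPair, numSlots, and ciphertext are already set up as above:

```cpp
// Estimate the remaining precision after decryption: GetLogPrecision()
// reports how many most-significant bits of the encoded values are
// still meaningful.
Plaintext result;
cc->Decrypt(keyPair.secretKey, ciphertext, &result);
result->SetLength(numSlots);
std::cout << "Estimated precision: " << result->GetLogPrecision() << " bits" << std::endl;
```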

One final note: numerically stable algorithms are expected to run under CKKS without major issues. Make sure you scale down after every multiplication. Ensure your data does not blow up. I have worked on examples with more than a few hundred iterations over a CKKS ciphertext and faced no precision issues. You can refer to page 14 of the CKKS paper for more on the precision loss in CKKS.

Hi Caesar, thank you for your response. Your explanation has helped, but I have a few additional questions that I hope you can clarify.

CKKS bootstrapping does not completely remove the noise.

Could you please elaborate on this point? Specifically, does CKKS bootstrapping actually reduce noise? And if so, to what extent? In my simulations, I have not observed any cases where the precision, as measured by GetLogPrecision(), increased after bootstrapping. It consistently remains the same or decreases, which leads me to question whether noise reduction is happening at all during the bootstrapping process.

You can use iterative bootstrapping to increase the bootstrapping precision, or build OpenFHE with NATIVE_SIZE=128 to run CKKS with 128-bit native words.

I’ve tried both approaches. Iterative bootstrapping indeed provides a significant boost in precision, effectively doubling it. However, the 128-bit build has only given me a few more bits of precision, which does not justify the additional computational cost.

Make sure you scale down after every multiplication.

Are you referring to the rescaling operation in CKKS? I’m currently using the FLEXIBLEAUTOEXT mode, which should automatically handle rescaling after each multiplication.

I have worked on examples with more than a few hundred iterations over a CKKS ciphertext and faced no precision issues.

If you are referring to the encrypted logistic regression training example, I’m working on a very similar application: training a Multi-Layer Perceptron (MLP) neural network. The algorithm is stable, but, as you can imagine, MLP training involves a significantly higher number of operations per iteration. Could this be the reason why you didn’t encounter noticeable precision issues, while I’m experiencing them in mine?

I recommend checking Figure 2 of the CKKS paper. The error is part of the message. Multiplying two encoded messages gives (\Delta m_1+e_1)(\Delta m_2+e_2) = \Delta^2 m_1m_2 + \Delta m_1e_2 + \Delta m_2e_1 + e_1e_2. When we scale down by \Delta (or, more precisely, by something very close to \Delta), the product m_1m_2 returns to the canonical scale \Delta instead of \Delta^2, and the multiplicatively expanded noise e_1e_2 is reduced by a factor of 1/\Delta, which is similar to removing LSBs of the product. This is very similar to fixed-point arithmetic.
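Spelling the rescale step out: dividing the product above by \Delta gives \Delta m_1m_2 + m_1e_2 + m_2e_1 + e_1e_2/\Delta (plus a small rounding term), so the result sits at scale \Delta again and the dominant remaining noise is the cross terms m_1e_2 + m_2e_1.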

The noise is there and will remain there forever; even decryption won’t remove it. So, to answer your question, CKKS bootstrapping does not remove the noise; it recharges your compute budget so you can multiply further.

Precision will keep falling, just as in any fixed-point (or maybe even floating-point) computation.

FLEXIBLEAUTOEXT should give the highest precision, and yes, it does rescale automatically.
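For contrast, a fragment showing the two modes side by side (context setup elided; ct1 and ct2 are assumed CKKS ciphertexts):

```cpp
// FIXEDMANUAL: you rescale yourself after each multiplication.
auto ctProd = cc->EvalMult(ct1, ct2);
ctProd      = cc->Rescale(ctProd);  // explicit scale-down by ~Delta

// FLEXIBLEAUTO / FLEXIBLEAUTOEXT: EvalMult inserts the rescaling step
// automatically, so no explicit Rescale call is needed.
auto ctProdAuto = cc->EvalMult(ct1, ct2);
```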

Make sure that when you evaluate non-linear functions via polynomial approximations, all input values lie within the approximation range; otherwise your message will blow up and you will face precision issues.
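For example, OpenFHE's Chebyshev-based function evaluation takes the approximation range explicitly. A sketch, where the bounds and degree are illustrative choices and cc/ciphertext are assumed to exist:

```cpp
#include <cmath>

// The polynomial only approximates the function on [a, b]; encrypted
// inputs outside this range make the evaluation diverge rapidly.
double a = -4.0, b = 4.0;  // chosen to cover the expected input range
uint32_t degree = 59;      // illustrative polynomial degree
auto ctSigmoid = cc->EvalChebyshevFunction(
    [](double x) { return 1.0 / (1.0 + std::exp(-x)); },  // sigmoid
    ciphertext, a, b, degree);
```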

Thanks again for your response. I have just a few final (I promise) clarifications.

Precision will keep falling, just as in any fixed-point (or maybe even floating-point) computation.

Does this mean that in CKKS, the loss of precision as measured by OpenFHE has nothing to do with the approximate nature of the scheme, but is only related to noise?

Make sure that when you evaluate non-linear functions via polynomial approximations, all input values lie within the approximation range; otherwise your message will blow up and you will face precision issues.

I am not using any polynomial approximations. However, are you implying that larger values will consume more precision (or rather, will contain more noise) than smaller values, even when computed through the same operations?

Pretty much, yes. You have a limited precision budget for the message in the MSBs. As long as the noise does not grow into the significant part of the message, you are fine; if it blows up, you will lose the significant bits of the message.

And yes to the second question. Refer to the multiplication formula above: the cross terms \Delta m_1e_2 + \Delta m_2e_1 are also part of the added noise.
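To connect this to the rescaled form: after dividing by \Delta those cross terms become m_1e_2 + m_2e_1, which grow linearly with the magnitude of the messages, so larger values do accumulate more noise through the same sequence of operations.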