Hello,
I have a complete beginner’s question about the meaning of bit precision in the CKKS scheme. I have already read related posts (such as the one on the meaning of the bit precision), but I still have doubts.
When decrypting, I get a statement like “Estimated precision: xx bits.”
Here is what I think I understand: when I encrypt a real number (a double) with CKKS, if the number has 10 bits of integer part and 20 bits of decimal part and I use a scaling factor of 20 bits, I am already losing 10 bits of precision, getting at best a number with 10 bits of integer part and 10 bits of decimal part. Is that right?
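To make my question concrete, here is a toy sketch of the plain fixed-point encoding I have in mind (pure Python, no FHE library, and I may well be modeling this wrong):

```python
# Toy model of CKKS-style fixed-point encoding with a 20-bit scaling factor.
# This only shows the error introduced by the encoding step itself,
# ignoring any noise added by the scheme.
SCALE_BITS = 20
scale = 2 ** SCALE_BITS

x = 123.456789                 # some double I want to encrypt
encoded = round(x * scale)     # integer the scheme actually operates on
decoded = encoded / scale      # best case after decryption (no noise)

# The rounding error at encoding time is at most half a unit of 2^-20:
err = abs(decoded - x)
```

Is this roughly the right mental model for what the scaling factor does before any homomorphic noise comes into play?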
In this case, will the estimated precision be 20 bits?
So ideally, if I have only one encrypted element, is the estimated precision the number of correct bits in the mantissa of the decrypted result?
If all this is correct, could someone expand on what Yuriy said in the post I mentioned:
Roughly it corresponds to the [scaling factor] − [log2 of the average L1 norm of the difference between approximate FHE result and floating-point result in the clear]
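Here is how I currently read that formula, as a toy computation (no FHE library; the vectors and the error magnitudes below are made up just for illustration, and I am reading “scaling factor” as its log2):

```python
import math

SCALE_BITS = 40  # log2 of the CKKS scaling factor (made-up value)

exact  = [1.5,         -2.25,         3.125]          # result in the clear
approx = [1.5 + 1e-9,  -2.25 - 2e-9,  3.125 + 1e-9]   # noisy FHE result

# Average L1 norm of the difference, measured in the scaled (integer) domain:
avg_err_scaled = sum(abs(a - e) * 2 ** SCALE_BITS
                     for a, e in zip(approx, exact)) / len(exact)

# [scaling factor] - [log2 of the average L1 norm of the difference]
precision = SCALE_BITS - math.log2(avg_err_scaled)

# Note this simplifies to -log2(average error in the real domain),
# i.e. the number of correct fractional bits of the result.
```

Is that simplification at the end the right way to understand it, i.e. the scaling factor cancels out and the estimate is really just counting correct fractional bits of the decrypted value?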
Thank you for your help, and sorry if this is a really basic question.