Understanding CKKS precision limits for large magnitude values

Hello,
I am trying to understand the practical limitations on double-precision values when using CKKS.
In standard 64-bit floating-point arithmetic, we have a 53-bit significand (52 explicitly stored fraction bits). However, in CKKS, due to ciphertext representation and scaling, the effective precision is reduced.

I observed this with a simple experiment: I created a vector where each slot contains a value with 13 digits before the decimal point and 2 after (i.e. about 15 significant digits, which is close to the precision limit of 64-bit floating-point representation). After encoding, encryption, decryption, and decoding (without any homomorphic operations), I observed a maximum error of around 1e-4 per slot (using a scalingModSize of 59 bits).
I suspect the large magnitude of the values causes this large error, and that CKKS is typically used for values in [-1, 1]; but for research purposes I would like a more formal way to determine safe input ranges.
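As a quick sanity check (plain Python, no FHE library needed), note that part of the error budget is already consumed by the plaintext representation itself: a double near 1e13 cannot resolve differences smaller than its unit in the last place (ulp), so the two decimal places in such a value are already below native double precision.

```python
import math

# A double in [2^43, 2^44) has ulp = 2^(43 - 52) = 2^-9 ~ 0.00195,
# so values with ~13 digits before the decimal point cannot represent
# 2 decimal places exactly even before any encryption is involved.
magnitude = 1.234e13
print(math.ulp(magnitude))  # 0.001953125
```

So the observed 1e-4 error should be compared against this plaintext floor, not against an idealized exact value.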

This raises the following questions:

  1. Is CKKS equivalent to fixed-point arithmetic? If so, how is the position of the “decimal point” determined? Is it automatically set based on the input values, or does it need to be chosen explicitly through the scaling factor?
  2. How can we estimate the maximum magnitude of values that can be safely encoded in CKKS without causing overflow or large precision loss?
  3. Is there a way to predict whether a given value will lead to overflow after a sequence of operations?
  4. More generally, given a circuit g (with additions, multiplications, rotations, and possibly bootstrapping), how can we determine a bound f_max such that:
    Decode(Decrypt(g(Encrypt(Encode(f_max))))) ≈ g(f_max) without overflow or unacceptable error?
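Regarding questions 1 and 2, one rough heuristic (a sketch under simplifying assumptions, not an exact guarantee): CKKS encoding behaves like fixed-point with an explicitly chosen scaling factor Δ = 2^scalingModSize, and decryption requires the scaled message to stay below roughly half the final ciphertext modulus q0 ≈ 2^firstModSize after all rescalings. This ignores noise growth and the norm expansion of the slot embedding, both of which tighten the real bound.

```python
def ckks_safe_magnitude(first_mod_bits: int, scaling_mod_bits: int) -> float:
    """Heuristic upper bound on encodable |x| (hypothetical helper, not an
    OpenFHE API): round(x * 2^scaling_mod_bits) must stay below half the
    final modulus ~2^first_mod_bits. Ignores noise and embedding expansion."""
    return 2.0 ** (first_mod_bits - scaling_mod_bits - 1)

# With typical 64-bit-limb parameters (firstModSize=60, scalingModSize=59),
# the bound is already ~1, consistent with keeping inputs near [-1, 1].
print(ckks_safe_magnitude(60, 59))  # 1.0
```

Encoding values around 1e13 at full precision would need roughly 44 more bits of headroom between firstModSize and scalingModSize than this configuration provides, which is why lowering the scaling factor (and hence precision) or rescaling the inputs is usually necessary.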

Any insights or references about this specific representation would be greatly appreciated.

To the best of my knowledge, analytical estimation of CKKS's precision limits is still an open problem, and a challenging one. Most of your questions are open-ended and would require substantial research to answer satisfactorily; I suggest checking the literature on this.

This paper would be a good starting point:

One last thing: empirical measurement is the best tool we have right now. Based on your input data distribution, you can run simulations to find your application's precision limits.
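Such a simulation can be as simple as a Monte Carlo harness that compares the homomorphic pipeline against the plaintext circuit over sampled inputs. A minimal sketch (the `he_eval` stub and the noise model below are placeholders, not real library calls; you would substitute your actual encode/encrypt/evaluate/decrypt/decode pipeline):

```python
import random

def measure_precision(he_eval, plain_eval, sampler, trials=1000):
    """Worst-case per-slot error observed over random inputs from `sampler`."""
    max_err = 0.0
    for _ in range(trials):
        x = sampler()
        max_err = max(max_err, abs(he_eval(x) - plain_eval(x)))
    return max_err

# Stub usage: g is the circuit; noisy_g stands in for the HE pipeline.
g = lambda x: x * x + 2.0 * x
noisy_g = lambda x: g(x) + random.uniform(-1e-5, 1e-5)  # stand-in for HE noise
err = measure_precision(noisy_g, g, lambda: random.uniform(-1.0, 1.0))
print(err)
```

Sweeping the sampler over your expected input range (and tail values) gives an empirical estimate of the f_max you asked about in question 4.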