How should the outputs of the Precision Estimator be understood?

Hello everyone,

I have written benchmarking logic that performs various calculations on the same dataset, and these calculations are always instantiated with the same CryptoContext. However, I have noticed something that I cannot yet explain: the calculation with the higher multiplicative depth yields the better precision value. As far as I understand, smaller values indicate better precision, i.e., -10 bits is better than -3 bits. A concrete example is the calculation of variance and standard deviation below. The multiplicative depth for the standard deviation is much higher because of the additional square root, which is approximated using a Chebyshev polynomial, yet it reports the better precision value.

How can this be explained?
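For reference, the standard-deviation step looks roughly like this (a simplified, self-contained sketch rather than my actual benchmark code; the multiplicative depth, the approximation interval [60, 130], and the sample values are illustrative, and the variances are encrypted directly here instead of being computed homomorphically):

```cpp
#include "openfhe.h"

#include <cmath>
#include <iostream>
#include <vector>

using namespace lbcrypto;

int main() {
    // Illustrative parameters, not the exact benchmark setup.
    CCParams<CryptoContextCKKSRNS> params;
    params.SetMultiplicativeDepth(14);  // generous for a degree-150 Chebyshev
    params.SetScalingModSize(50);
    CryptoContext<DCRTPoly> cc = GenCryptoContext(params);
    cc->Enable(PKE);
    cc->Enable(KEYSWITCH);
    cc->Enable(LEVELEDSHE);
    cc->Enable(ADVANCEDSHE);  // required for EvalChebyshevFunction

    auto keys = cc->KeyGen();
    cc->EvalMultKeyGen(keys.secretKey);

    // One variance per slot (in the real benchmark these are themselves
    // computed homomorphically from the dataset).
    std::vector<double> variances = {91.2262, 63.7289, 125.5792};
    auto ctVar = cc->Encrypt(keys.publicKey, cc->MakeCKKSPackedPlaintext(variances));

    // Standard deviation: sqrt approximated by a degree-150 Chebyshev
    // polynomial on an interval covering all variance values.
    auto ctStd = cc->EvalChebyshevFunction(
        [](double x) { return std::sqrt(x); }, ctVar, 60.0, 130.0, 150);

    Plaintext result;
    cc->Decrypt(keys.secretKey, ctStd, &result);
    result->SetLength(variances.size());
    std::cout << "std dev[0]: " << result->GetRealPackedValue()[0] << std::endl;
    return 0;
}
```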

Results of **variance** calculation
        Expected result:        91.2262 93.1295 86.2412 95.9512 63.7289 81.6778 101.5013 72.2847 109.5454 103.1739 74.9708 112.4619 99.3997 124.7175 86.0879 103.9474 107.4201 103.0571 106.1821 125.5792 105.9413 
        Actual result:          91.2259 93.1292 86.2409 95.9509 63.7287 81.6775 101.5010 72.2844 109.5450 103.1736 74.9706 112.4615 99.3993 124.7171 86.0877 103.9471 107.4198 103.0568 106.1818 125.5787 105.9409 
        Precision: -12.0000 bits

Results of **standard deviation** calculation
        Expected result:        9.5512 9.6504 9.2866 9.7955 7.9830 9.0376 10.0748 8.5020 10.4664 10.1575 8.6586 10.6048 9.9699 11.1677 9.2784 10.1955 10.3644 10.1517 10.3045 11.2062 10.2928 
        Actual result:          9.5513 9.6504 9.2867 9.7955 7.9831 9.0376 10.0748 8.5020 10.4664 10.1575 8.6586 10.6048 9.9700 11.1677 9.2784 10.1955 10.3644 10.1518 10.3045 11.2063 10.2928 
        Precision: -14.0000 bits
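For completeness, a precision figure of this form corresponds to the base-2 logarithm of the largest absolute slot error. A minimal sketch of such an estimator (the infinity-norm convention is an assumption; it matches how I read the numbers above):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Precision as log2 of the infinity-norm error: a result of -12 means the
// largest slot error is about 2^-12 ~ 0.00024, so more negative values
// indicate higher precision.
double EstimatePrecision(const std::vector<double>& expected,
                         const std::vector<double>& actual) {
    double maxError = 0.0;
    for (std::size_t i = 0; i < expected.size(); ++i)
        maxError = std::max(maxError, std::abs(expected[i] - actual[i]));
    return std::log2(maxError);
}
```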

Could you clarify what you mean by this statement?

> In general, the higher the depth, the lower the precision. Please look at *Approximate Homomorphic Encryption with Reduced Approximation Error* for more details on how the CKKS approximation error affects the precision.

Based on the example, the standard deviation achieves higher precision (-14 bits) than the variance (-12 bits); if I understand the logarithmic scale correctly, the more negative value means a smaller error. What I don't understand is that the standard deviation has a greater multiplicative depth, because the square root is approximated using a Chebyshev polynomial of degree 150, so by the statement above I would expect its precision to be worse. It is unclear to me why the precision values come out this way. Thank you in advance.

CKKS uses fixed-precision arithmetic. In other words, the real numbers are multiplied by the scaling factor and then rounded to integers, and the approximation noise is added to these integers during encryption. During multiplication, the existing approximation error gets multiplied by the magnitude of the message in the other multiplicand. The paper I referred to explains this in detail.
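To make that concrete, here is a plain, unencrypted toy model of the encode-multiply-rescale arithmetic. The scaling factor and the noise magnitude are made up for illustration; only the structure of the error terms matters:

```cpp
#include <cmath>
#include <iostream>

// Toy model of CKKS fixed-point encoding (no encryption): a message m is
// stored as m * delta plus some noise e in scaled units.
int main() {
    const double delta = std::pow(2.0, 40);  // scaling factor (illustrative)
    const double e1 = 1e6, e2 = 1e6;         // noise in scaled units (made up)

    auto productError = [&](double m1, double m2) {
        // (delta*m1 + e1) * (delta*m2 + e2), rescaled by delta, equals
        // delta*m1*m2 + (m1*e2 + m2*e1) + e1*e2/delta: each existing noise
        // term gets multiplied by the magnitude of the *other* message.
        double errScaled = m1 * e2 + m2 * e1 + e1 * e2 / delta;
        return errScaled / delta;  // error converted back to message units
    };

    // Squaring a value in the std-dev range (~9.5) vs the variance range (~92):
    std::cout << "error after squaring 9.5:  " << productError(9.5, 9.5) << "\n";
    std::cout << "error after squaring 92.0: " << productError(92.0, 92.0) << "\n";
    // The second error is ~10x larger, i.e., ~3.3 bits worse, purely because
    // the messages are larger in magnitude.
}
```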

The precision (I assume you are using the OpenFHE estimate) shows the precision after the decimal point (with the scaling factor corresponding to 1.0), i.e., it is an absolute rather than a relative measure. The relative error of the two computations is comparable, but since the variance is larger in magnitude (~92) than the standard deviation (~9.5), its absolute error is proportionally larger, and hence its decimal precision is worse than for the standard deviation.
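A rough sanity check against your printed (rounded) values:

```
variance:   max printed error ≈ 5e-4  →  log2(5e-4) ≈ -11 bits   (reported: -12)
std dev:    max printed error ≈ 1e-4  →  log2(1e-4) ≈ -13.3 bits (reported: -14)
magnitudes: 92 / 9.5 ≈ 9.7 ≈ 2^3.3   →  ~3 bits of expected difference
```

The output is rounded to four decimal places, so the reconstructed errors are only approximate, but the ~2-bit gap between the two precision estimates is consistent with the ~2^3.3 ratio of the message magnitudes.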