Hello everyone,
I have written benchmarking logic that performs various calculations based on the same dataset, and these calculations are always instantiated with the same CryptoContext. However, I have noticed something that I cannot yet explain: I had assumed that calculations with a simpler multiplicative depth would yield better precision, but the example below seems to show the opposite. As far as I understand, smaller values mean better precision, i.e. -10 bits is better than -3 bits. In the concrete variance / standard deviation example below, the standard deviation is reported with better precision (-14 bits) than the variance (-12 bits), even though its multiplicative depth is much higher due to the additional square root expression, which is approximated using Chebyshev polynomials. How can this be explained?
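For reference, this is the kind of log2-scale error metric I am referring to. The sketch below is simplified and uses the maximum absolute error; my benchmark may aggregate the per-slot errors slightly differently:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Simplified sketch of a log2-scale precision metric: the base-2 logarithm of
// the largest absolute difference between expected and decrypted values.
// An error around 2^-12 gives roughly -12 bits; more negative is more precise.
double LogPrecisionBits(const std::vector<double>& expected,
                        const std::vector<double>& actual) {
    double maxError = 0.0;
    for (std::size_t i = 0; i < expected.size() && i < actual.size(); ++i) {
        maxError = std::max(maxError, std::abs(expected[i] - actual[i]));
    }
    return std::log2(maxError);
}
```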
Results of **variance** calculation
Expected result: 91.2262 93.1295 86.2412 95.9512 63.7289 81.6778 101.5013 72.2847 109.5454 103.1739 74.9708 112.4619 99.3997 124.7175 86.0879 103.9474 107.4201 103.0571 106.1821 125.5792 105.9413
Actual result: 91.2259 93.1292 86.2409 95.9509 63.7287 81.6775 101.5010 72.2844 109.5450 103.1736 74.9706 112.4615 99.3993 124.7171 86.0877 103.9471 107.4198 103.0568 106.1818 125.5787 105.9409
Precision: -12.0000 bits
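For context, a homomorphic variance along the lines of E[X²] − E[X]² can be sketched as follows. This is a simplified illustration, not my exact benchmark code; it assumes the EvalSum and EvalMult keys have been generated and that n values are packed into the slots:

```cpp
#include "openfhe.h"

using namespace lbcrypto;

// Simplified sketch: variance of n packed values as E[X^2] - E[X]^2.
// Assumes EvalSumKeyGen/EvalMultKeyGen were called and n is a power of two.
Ciphertext<DCRTPoly> EvalVariance(const CryptoContext<DCRTPoly>& cc,
                                  const Ciphertext<DCRTPoly>& ct,
                                  uint32_t n) {
    const double inv = 1.0 / static_cast<double>(n);
    auto mean   = cc->EvalMult(cc->EvalSum(ct, n), inv);                    // E[X]
    auto meanSq = cc->EvalMult(cc->EvalSum(cc->EvalMult(ct, ct), n), inv);  // E[X^2]
    return cc->EvalSub(meanSq, cc->EvalMult(mean, mean));                   // E[X^2] - E[X]^2
}
```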
Results of **standard deviation** calculation
Expected result: 9.5512 9.6504 9.2866 9.7955 7.9830 9.0376 10.0748 8.5020 10.4664 10.1575 8.6586 10.6048 9.9699 11.1677 9.2784 10.1955 10.3644 10.1517 10.3045 11.2062 10.2928
Actual result: 9.5513 9.6504 9.2867 9.7955 7.9831 9.0376 10.0748 8.5020 10.4664 10.1575 8.6586 10.6048 9.9700 11.1677 9.2784 10.1955 10.3644 10.1518 10.3045 11.2063 10.2928
Precision: -14.0000 bits
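The standard deviation then applies a Chebyshev approximation of the square root on top of the variance ciphertext. A minimal sketch using OpenFHE's EvalChebyshevFunction; the interval bounds and polynomial degree here are illustrative placeholders and have to cover the actual range of the variance values:

```cpp
#include "openfhe.h"

#include <cmath>

using namespace lbcrypto;

// Simplified sketch: standard deviation as a Chebyshev approximation of
// sqrt(x) evaluated on the variance ciphertext. The bounds [lower, upper]
// and the degree are illustrative and must match the real variance range.
Ciphertext<DCRTPoly> EvalStdDev(const CryptoContext<DCRTPoly>& cc,
                                const Ciphertext<DCRTPoly>& variance,
                                double lower, double upper, uint32_t degree) {
    return cc->EvalChebyshevFunction(
        [](double x) { return std::sqrt(x); },  // function being approximated
        variance, lower, upper, degree);
}
```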