Hi guys,
I am currently going through the implementation of BFV multiplication in OpenFHE. It seems OpenFHE implements an improved HPS variant described in "Revisiting Homomorphic Encryption Schemes for Finite Fields".
When it comes to scaling down the ciphertext ct_tensor (ScaleDown), it seems OpenFHE executes an exact base conversion, which requires multiplication by some precomputed values, including an integral part and a fractional part (shown below).
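To make the integral/fractional split concrete, here is a toy Python sketch (not OpenFHE code; the parameters are made up for illustration) of the HPS-style exact scale-down: computing round(t*x/Q) mod t from the RNS residues of x, using precomputed integral parts omega_i and fractional parts theta_i of t*[(Q/q_i)^{-1}]_{q_i}/q_i:

```python
from math import prod

# Toy parameters for illustration only (not OpenFHE's actual moduli).
t = 17                      # plaintext modulus
Q_moduli = [97, 101, 103]   # RNS base Q = {q_i}, k = 3 moduli
Q = prod(Q_moduli)

# Precompute, for each q_i, the split of t * [(Q/q_i)^{-1} mod q_i] / q_i
# into an integral part omega_i and a fractional part theta_i.
omega, theta = [], []
for qi in Q_moduli:
    Qi = Q // qi                       # \hat{Q}_i
    num = t * pow(Qi, -1, qi)          # t * [\hat{Q}_i^{-1}]_{q_i}
    omega.append(num // qi)            # integral part (exact integer)
    theta.append((num % qi) / qi)      # fractional part (floating point)

def scale_down(x):
    """Compute round(t*x/Q) mod t from the RNS residues of x."""
    residues = [x % qi for qi in Q_moduli]
    # Integer contribution plus the rounded fractional contribution; the
    # -t*u correction term from the CRT reconstruction vanishes mod t.
    w = sum(xi * wi for xi, wi in zip(residues, omega))
    f = sum(xi * th for xi, th in zip(residues, theta))
    return (w + round(f)) % t
```

The fractional accumulation `sum(xi * th ...)` is the floating-point piece under discussion; everything else stays in integer arithmetic.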
So, my question is: can this exact scaling down be replaced by approximate scaling down, as in BEHZ? Of course, approximate scaling by t/P would introduce a small extra error, bounded by k (the number of moduli) after multiplying by P^{-1}, which should be negligible.
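For reference, the BEHZ-style fast (approximate) base conversion I have in mind looks like the following toy sketch (made-up parameters, not OpenFHE code). It skips the exact rounding and simply sums the CRT terms, so the result overshoots by alpha*Q for some unknown 0 <= alpha < k:

```python
from math import prod

# Toy parameters for illustration only (not OpenFHE's actual moduli).
Q_moduli = [97, 101, 103]   # source RNS base, k = 3 moduli
Q = prod(Q_moduli)
p = 257                     # one modulus of the target base

def fast_base_conv(residues, p):
    """BEHZ-style fast base conversion: returns (x + alpha*Q) mod p for some
    unknown overshoot 0 <= alpha < k, instead of x mod p exactly."""
    acc = 0
    for xi, qi in zip(residues, Q_moduli):
        Qi = Q // qi                              # \hat{Q}_i
        acc += ((xi * pow(Qi, -1, qi)) % qi) * Qi
    return acc % p

x = 12345
approx = fast_base_conv([x % qi for qi in Q_moduli], p)
# approx differs from x mod p by a multiple of Q, with alpha < k = 3.
```

It is exactly this alpha overshoot that turns into the k-bounded extra noise after the t/P scaling.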
Thank you for the question. It is possible that the extra noise introduced by approximate scaling would be negligible in most practical cases (I would need to check it more carefully though). At a high level, we chose HPS as the basis for the improved BFV multiplication because HPS multiplication is simpler and slightly faster than BEHZ. We kept the floating-point operation in the scaling because its computational complexity is very small compared to the rest of the computations in the scaling. Since floating-point arithmetic is already required for other steps in BFV multiplication, there is no benefit in trying to avoid it here. In summary, the performance benefit gained from this approximation is close to negligible, so we decided not to overcomplicate the noise analysis (and noise expressions).
I looked at it more carefully. Dropping the fractional term in ScaleDown seems to be justified but the overall performance gain from this will probably be close to negligible.
Agreed: when floating-point operations are available, HPS with optimizations is superior to BEHZ in both performance and noise growth. However, I am currently investigating implementing the HPS-BFV scheme in OpenFHE on machines that only support integer operations, since BEHZ is more complex to implement (it requires a small Montgomery reduction with an additional special prime to correct FastBaseConversion).
When it comes to using fixed-point operations to implement HPS-BFV, exact base conversion is rather simple, and the precision loss due to the limited precision of fixed-point arithmetic is negligible. But exact ScaleDown becomes a problem: if we follow the original ScaleDown in HPS, the number of simulated floating-point operations required goes up from (k+1) to k(k+1). So I was trying to replace the exact ScaleDown in HPS-BFV with approximate scaling down, as in BEHZ.
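For what it's worth, the fractional accumulation in the exact ScaleDown can be simulated with integer-only fixed-point arithmetic along these lines (a toy Python sketch with made-up parameters; the fraction width S is my own precision choice, not something taken from the paper or from OpenFHE):

```python
from math import prod

# Toy parameters for illustration only (not OpenFHE's actual moduli).
t = 17
Q_moduli = [97, 101, 103]
Q = prod(Q_moduli)
S = 40  # fixed-point fraction bits; must be large enough for correct rounding

omega, theta_fp = [], []
for qi in Q_moduli:
    num = t * pow(Q // qi, -1, qi)     # t * [\hat{Q}_i^{-1}]_{q_i}
    omega.append(num // qi)            # integral part of num / q_i
    # Fractional part of num / q_i, scaled to S bits and rounded,
    # computed entirely in integer arithmetic.
    theta_fp.append((((num % qi) << S) + qi // 2) // qi)

def scale_down_fixed(x):
    """Integer-only computation of round(t*x/Q) mod t."""
    residues = [x % qi for qi in Q_moduli]
    w = sum(xi * wi for xi, wi in zip(residues, omega))
    f = sum(xi * th for xi, th in zip(residues, theta_fp))
    return (w + ((f + (1 << (S - 1))) >> S)) % t   # round the S-bit fraction
```

Each term of the `f` accumulation is one simulated floating-point multiply-add, which is where the (k+1) vs k(k+1) operation count shows up once the second RNS basis is involved.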
Thank you for your advice. I do think that kind of substitution is applicable, as the two RNS variants of BFV are so similar. And the noise introduced is a constant k per multiplication, which is negligible compared to the inherent noise growth of multiplication.
Hi,
The term ct_{tensor} is mod QP.
The procedure ScaleDown with t/P will reduce it to mod Q. How?
Moreover, ct_{tensor} is the result of multiplying two ciphertexts (say ct_{1} and ct_{2}).
Then, ct_{tensor} should also include QP/t [m_{1}m_{2}] + P/t v_{tensor} + QP/t k_{tensor} + e_{round} (ct_{1})
I suggest creating a new topic rather than appending your questions to a prior topic that has an accepted solution. Also, please describe your questions in more detail, referring to either specific equations in papers or code lines in OpenFHE.