Mathematical Explanation for Varying Runtimes and Precision at 256-bit Security Levels in Threshold FHE

[Figure: runtime (Y-axis) vs. precision in bits (X-axis)]

I have written a program that encrypts data at predefined security levels of 128-, 192-, and 256-bit security. I have collected data for various parameter variations, as shown in the figure above; I can achieve 256-bit security with a precision of approximately -20 bits at half the runtime, with a ring dimension of 65k. I am looking for a possible mathematical explanation for this behavior.

The parameters I specify in the CryptoContext for Threshold FHE are listed below for one entry; this example corresponds to a runtime of 337832.6667 ms and a precision of -18 bits.
Scaling Technique [-] = FIXEDAUTO
Security Level [-] = 256bit
Parties [-] = 4
Scale Mod Size [bit] = 40
First Mod Size [bit] = 45
Compression Level [-] = 2
Batch Size [-] = 32

The calculated values from the library are as follows:

Ring Dimension [-] = 65536 (65k)
Log2(q) [bit] = 605
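
For reference, here is a minimal sketch (not the poster's actual program) of how an entry like this might be configured in OpenFHE, assuming the standard CCParams<CryptoContextCKKSRNS> interface. The multiparty mode setting and the way the modulus is printed are assumptions; the compression level and the number of parties are handled in the multiparty key-generation flow rather than in CCParams, so they are only noted in comments.

```cpp
#include "openfhe.h"

using namespace lbcrypto;

int main() {
    // Parameters mirroring the entry above (multiplicative depth 14 is stated later in the thread).
    CCParams<CryptoContextCKKSRNS> parameters;
    parameters.SetSecurityLevel(HEStd_256_classic);  // 256-bit classical security
    parameters.SetScalingTechnique(FIXEDAUTO);
    parameters.SetScalingModSize(40);                // Scale Mod Size [bit]
    parameters.SetFirstModSize(45);                  // First Mod Size [bit]
    parameters.SetBatchSize(32);
    parameters.SetMultiplicativeDepth(14);
    // Threshold-FHE-specific setting (assumption: noise-flooding multiparty mode).
    // "Parties = 4" and "Compression Level = 2" are applied during the multiparty
    // key-generation / interactive bootstrapping setup, not here.
    parameters.SetMultipartyMode(NOISE_FLOODING_MULTIPARTY);

    CryptoContext<DCRTPoly> cc = GenCryptoContext(parameters);
    cc->Enable(PKE);
    cc->Enable(KEYSWITCH);
    cc->Enable(LEVELEDSHE);
    cc->Enable(MULTIPARTY);

    // OpenFHE derives the ring dimension and total ciphertext modulus from these inputs.
    std::cout << "Ring dimension: " << cc->GetRingDimension() << std::endl;
    std::cout << "log2(Q) [bit]: "
              << cc->GetCryptoParameters()->GetElementParams()->GetModulus().GetMSB()
              << std::endl;
    return 0;
}
```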

Hi @moghit02,

I am not sure I fully understand the question. Your graph implies you varied the scale mod size as well as the security level (the range of scale mod sizes was not provided in the message itself; I am inferring it from the increased precision in the graph). What do you mean by "I can achieve 256-bit security with an approximate precision of -20 bits at half the runtime with a ring dimension of 65k"? Are you saying that for a smaller scale mod size you get a higher runtime? Could you clarify what you are asking, and also provide the inputs you are using and the ring dimensions that OpenFHE selects? Also, what is the multiplicative depth for your experiment?

Hello @ypolyakov, you have inferred correctly. I used three scale mod sizes, namely 40, 50, and 59, and the multiplicative depth is 14. For scale mod size 40 with FIXEDAUTO, I achieve the desired result in terms of performance and precision, which is sufficient for me. These results lie in the yellow range on the X-axis (the points between -15 and -20 bits of precision) and show a significantly shorter runtime, which I attribute to the halved ring dimension, where the corresponding \(\log_2(q) = 605\). What I am still missing is a clear explanation of how it is possible to reduce the ring dimension, and at what point it is reduced, while maintaining a 256-bit security level at the desired precision. I am looking for a method to always achieve maximum security with minimal runtime and moderate precision. That is what my question is about. I hope my concern is clear now.
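
(For reference, the reported \(\log_2(q) = 605\) follows directly from these inputs, assuming one CKKS tower per multiplication level plus the first modulus and no additional towers:

\[
\log_2(Q) \approx \text{firstModSize} + \text{multDepth} \cdot \text{scaleModSize} = 45 + 14 \cdot 40 = 605.
\]

A larger scale mod size, e.g. 50 or 59, increases this total accordingly.)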

The logic is very similar to what is described in the thread "In which scenario does the decrease in Q result in reduced ring dimension?". It is all driven by the security tables.
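
To illustrate the idea (this is a sketch of the selection logic, not the actual OpenFHE code), the security tables give, for each security level, a maximum admissible \(\log_2(Q)\) per ring dimension; the library then chooses the smallest ring dimension whose bound covers the \(\log_2(Q)\) required by the depth and scaling sizes. The per-ring-dimension caps below are placeholders only; the real bounds must be read off the homomorphicencryption.org standard tables and depend on the secret distribution and security level.

```cpp
#include <cstdint>
#include <iostream>
#include <map>

int main() {
    // maxLogQ[N] = PLACEHOLDER upper bound on log2(Q) at the chosen security level.
    // These are NOT the actual table entries; consult the HE standard tables.
    std::map<uint32_t, uint32_t> maxLogQ = {
        {16384, 250}, {32768, 500}, {65536, 1000}, {131072, 2000}};

    uint32_t firstModSize = 45, scaleModSize = 40, multDepth = 14;
    uint32_t requiredLogQ = firstModSize + multDepth * scaleModSize;  // = 605 here

    // std::map iterates in ascending key order, so the first hit is the smallest
    // ring dimension that still satisfies the security bound.
    for (const auto& [ringDim, bound] : maxLogQ) {
        if (requiredLogQ <= bound) {
            std::cout << "required log2(Q) = " << requiredLogQ
                      << " -> smallest admissible ring dimension = " << ringDim << "\n";
            break;
        }
    }
    // A larger scale mod size (e.g. 59) raises requiredLogQ and can push the selection
    // to the next ring dimension, roughly doubling the cost of every operation.
    return 0;
}
```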