Guidelines for 128-bit secure deep circuits

Hello, another question here :smiley:

I wrote a neural network test circuit based on CKKS with the HEStd_NotSet security setting; now I am trying to secure it by switching to HEStd_128_classic.

However, I am stuck on parameter selection.

I am using

  • N=2^{16}
  • (firstMod) scale \Delta = 2^{60}
  • (dcrtBits) intermediate primes q_i = 2^{59}
  • Level Budget: \{3, 3\}.
  • HKS d_{num} = 3

I can’t reduce the scale or the prime sizes because bootstrapping performs really poorly with smaller values (my ciphertexts contain values \in (-1, 1)). This setting leaves me only 2 levels before bootstrapping, which makes almost every application impractical.
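To make the budget concrete, here is a rough back-of-the-envelope calculation of log QP for these parameters. This is only a sketch: the ~12 levels assumed for the approximate mod reduction inside bootstrapping and the ~1700-bit 128-bit-security cap for N = 2^{16} are round figures I am assuming, not OpenFHE's exact values.

```python
# Rough modulus-budget calculator for the parameters listed above.
# ASSUMPTIONS: the ~12 levels for approximate mod reduction and the
# ~1700-bit cap for N = 2^16 at 128-bit classical security are round
# illustrative figures -- check OpenFHE's security tables for exact values.

FIRST_MOD = 60   # log2(firstMod), the scale Delta = 2^60
DCRT_BITS = 59   # log2 of each intermediate prime q_i
DNUM = 3         # hybrid key-switching digits

BOOT_DEPTH = 3 + 3 + 12   # level budget {3, 3} + approx. mod reduction
USABLE_LEVELS = 2         # levels left for the actual computation

total_levels = BOOT_DEPTH + USABLE_LEVELS
log_q = FIRST_MOD + total_levels * DCRT_BITS
log_p = -(-log_q // DNUM)          # ceil(log Q / dnum): auxiliary modulus
log_qp = log_q + log_p

print(f"log Q  = {log_q}")         # 1240
print(f"log P  ~ {log_p}")         # 414
print(f"log QP ~ {log_qp}")        # 1654, close to the assumed ~1700-bit cap
```

This shows where the levels go: bootstrapping itself consumes most of the modulus, and each extra usable level costs about 59 + 59/dnum ≈ 79 bits of log QP.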

I tried increasing d_{num} in the HKS procedure, but it helps only a little.

However, most papers describing HE-based neural networks use ring dimension N=2^{16}. Do you have any advice or tips? My only idea is to move to N=2^{17}, but that would require far more resources (my laptop is already exhausted at N=2^{16} :frowning: ).

Thank you again!!

Most of those papers are based on a sparse ternary secret distribution. Uniform ternary is the default in OpenFHE (it is the distribution included in the standard). You can switch to the sparse distribution to save a few levels (around 3).

Also, you can experiment with the number of digits in hybrid key switching (a larger number of digits leads to a smaller auxiliary modulus on top of the regular log q, at the cost of slower key switching).
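As a quick illustration of the trade-off, using the usual first-order estimate log P ≈ ceil(log Q / dnum) and an illustrative log Q of 1240 bits (both are assumptions, not exact OpenFHE internals):

```python
# Effect of the number of hybrid key-switching digits (dnum) on the
# auxiliary modulus P. ASSUMPTIONS: log P ~ ceil(log Q / dnum) is a
# first-order estimate; LOG_Q = 1240 is just an illustrative value.

LOG_Q = 1240

log_p = {dnum: -(-LOG_Q // dnum) for dnum in (1, 2, 3, 4, 5)}
for dnum, p in log_p.items():
    print(f"dnum = {dnum}: log P ~ {p:4d}, log QP ~ {LOG_Q + p}")
```

More digits shrink P (and hence log QP), but each key switch gets slower because the ciphertext is decomposed into more pieces.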

You can also reduce both the scaling factor and the first modulus to decrease log q (at some cost in precision).
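For example, at a fixed total depth, shrinking the scaling factor and first modulus cuts log Q roughly linearly. This is a sketch: the depth of 20 and the candidate sizes are illustrative choices, and every bit shaved off the scale also removes a bit of precision per level.

```python
# How smaller (firstMod, dcrtBits) choices shrink log Q at a fixed depth.
# ASSUMPTION: DEPTH = 20 and the candidate sizes are illustrative only;
# each bit shaved off Delta also removes a bit of precision per level.

DEPTH = 20
log_q = {}
for first_mod, dcrt_bits in ((60, 59), (55, 50), (51, 45)):
    log_q[(first_mod, dcrt_bits)] = first_mod + DEPTH * dcrt_bits
    print(f"firstMod 2^{first_mod}, primes 2^{dcrt_bits}: "
          f"log Q = {log_q[(first_mod, dcrt_bits)]}")
```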

When using bootstrapping, you can also use double-precision CKKS bootstrapping (iterative CKKS bootstrapping). This will give you more precision at the expense of doubling the bootstrapping time.

With all of these optimizations applied, N = 2^{16} is achievable while still providing 128 bits of security.


What is the difference between sparse and uniform ternary? I noticed the former gives much better precision in bootstrapping.

Sparse secrets are considered “less standard”: the parameters for them are less stable, and there are attacks specific to sparse secrets. This is why they were not included in the standard. It is true that the sparse secret settings provide lower computational complexity and higher precision (as both depend on the norm of the secret key, which is smaller in this case). Most research papers use this setting because of its higher efficiency.
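To illustrate the norm difference concretely, here is a small sketch comparing the two secret distributions. The Hamming weight h = 64 is a value commonly seen in the literature, not an OpenFHE default, and the sampling below is illustrative, not OpenFHE's actual sampler.

```python
# Uniform vs. sparse ternary secrets: the sparse secret has far fewer
# nonzero coefficients, hence a smaller norm, hence less noise growth
# in key switching and bootstrapping.
# ASSUMPTION: h = 64 is an illustrative Hamming weight, not a default.
import random

N = 1 << 16
random.seed(0)

# Uniform ternary: each coefficient is -1, 0, or +1 with equal probability.
uniform = [random.choice((-1, 0, 1)) for _ in range(N)]

# Sparse ternary: exactly h nonzero coefficients (+1 or -1), rest zero.
h = 64
sparse = [0] * N
for i in random.sample(range(N), h):
    sparse[i] = random.choice((-1, 1))

hw_uniform = sum(c != 0 for c in uniform)   # about 2N/3 on average
hw_sparse = sum(c != 0 for c in sparse)     # exactly h
print("uniform Hamming weight:", hw_uniform)
print("sparse  Hamming weight:", hw_sparse)
```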