Different results for different batch sizes (CKKS)

Hi everyone,

I’m programming a neural network with a multiplicative depth of 15 to 18 (depending on the degree of the polynomial).

I’m making several predictions at the same time using batching, and I’m using a Chebyshev polynomial approximation.

I initially used a batch size of 8192.
I have tested different polynomial degrees. With polynomials of degree 3 (for a multiplicative depth of 15) and degree 7 (for a multiplicative depth of 18), everything works with this batch size.

When I tested with polynomials of degree 13, keeping the depth of 18 (calculated according to this document, and equivalent to that of the degree-7 polynomials), I get an error: “Approximation error is too high”.

I then tried reducing the batch size several times: 4096 doesn’t work, 2048 doesn’t work, and 1024 runs, but the results are far too wrong (they should be around 0, and the result is above 10^3).

Reducing the batch size makes this error gradually disappear. However, I’d like to make as many predictions as possible in a single iteration, as this takes a lot of time. Do you have any solutions? Is this normal?

I’m using a modSize of 50, and I’m running OpenFHE on Windows 11.

It is hard to help you without examining any code.
Try the following:

  1. set the firstModSize to 60 and scalingModSize to 59
  2. increase the multiplicative depth by 1 or 2
  3. change the scalingTechnique to FLEXIBLEAUTOEXT
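In OpenFHE's `CCParams` API, those settings would look roughly like this (a sketch only; the rest of the parameter setup, key generation, etc. is assumed to be in place):

```cpp
// Sketch of the suggested CKKS parameter changes in OpenFHE.
// The remaining configuration (batch size, security level, keys) is assumed.
CCParams<CryptoContextCKKSRNS> parameters;
parameters.SetFirstModSize(60);          // suggestion 1: larger first modulus
parameters.SetScalingModSize(59);        // suggestion 1: scaling modulus just below it
parameters.SetMultiplicativeDepth(20);   // suggestion 2: original depth 18 plus 2 extra levels
parameters.SetScalingTechnique(FLEXIBLEAUTOEXT);  // suggestion 3
```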

This might help resolve the problem.

Hello, thank you for your reply @Caesar !

Unfortunately, none of these solutions work:

  1. “set the firstModSize to 60 and scalingModSize to 59”: doesn’t work. After the first large computation, the program crashes with the error: “terminate called recursively”.
  2. Increasing the multiplicative depth helps a little: I can now perform the computation with a batch size of 1024 and a multiplicative depth of 24, but my computer can’t handle more than that. Unfortunately, the results are still heavily affected by noise.
  3. The FLEXIBLEAUTOEXT technique hasn’t changed anything.

Is there a relationship between the size of the batch and the noise generated during operations?

Also, my code is way too long to show here :confused:

I would try setting the modSize to something like 40, or adding an extra level to the multiplicative depth. I would keep firstModSize at 60 (the default). My hypothesis is that your computation produces very large values. The difference between firstModSize and modSize determines how big the values can get (roughly 2^(firstModSize − modSize), i.e., up to 2^20 or so in this case).
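Concretely, that suggestion would look something like this in OpenFHE (a sketch; the rest of the configuration is assumed):

```cpp
// Sketch: keep firstModSize at the default of 60 and lower the scaling
// modulus to 40, leaving roughly 2^(60-40) = 2^20 of headroom for
// large intermediate values before they overflow the scale.
CCParams<CryptoContextCKKSRNS> parameters;
parameters.SetFirstModSize(60);
parameters.SetScalingModSize(40);
parameters.SetMultiplicativeDepth(19);  // one extra level on top of the original 18
```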

Also, are you scaling the inputs to [-1, 1] for the Chebyshev interpolation yourself, or do you provide a range?