Division approximation for large interval

Hi all,

I am trying to approximate a division where the denominator can have a huge range (anything between 0.00001 and 1). Unfortunately I can't avoid this. I know that the EvalDivide() function isn't supposed to be used on values smaller than 1, so I am scaling the values up, which leads to the interval (1, 100000). I am aware that this is a huge interval, but with a large polynomial degree of 2031 I still get decent precision.

My problem starts when I try to scale the result back, which requires multiplying it by 100000. After this multiplication the ciphertext precision becomes too low and decryption is no longer possible; I assume the large factor of 100000 scales the error up too much.
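To make the setup concrete, here is the scaling identity I am relying on, in plain floats (no FHE involved; the eps value below is just a made-up stand-in for the approximation error of the reciprocal):

# Plain-float illustration of the scaling trick.
# 1/b == scale * (1 / (scale * b)), but any absolute error in the
# approximated reciprocal gets multiplied by the same factor on the way back.
scale = 100_000
b = 0.00001                             # worst-case denominator
eps = 1e-6                              # made-up stand-in for the Chebyshev approximation error
approx_recip = 1.0 / (scale * b) + eps  # rough stand-in for what EvalChebyshevFunction would return
result = scale * approx_recip           # rescale: the error is now eps * scale = 0.1
print(result, "vs exact", 1.0 / b)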

Are you aware of any strategies to avoid these problems for such large intervals? Or is this just not practically possible?

Thanks already!
Best regards, Karl

It would be helpful to provide more context here. What are you trying to do that requires this division?
There might be a different algorithm that is more FHE-friendly for computing your task.

Thanks for your reply! I am trying to calculate the winding number of a polygon around a point to determine whether the point lies inside, using the following formula:
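Roughly (sketching it here, with $q$ the query point, $P_1, \dots, P_n$ the polygon vertices, $v_i = P_i - q$ the vector from the point to the $i$-th vertex, and indices taken cyclically):

$$ w(q) \approx \frac{1}{2\pi} \sum_{i=1}^{n} \arccos\!\left( \frac{\langle v_i, v_{i+1} \rangle}{\lVert v_i \rVert \, \lVert v_{i+1} \rVert} \right) $$

The division in question is the normalization by the product of the two norms.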

The large interval comes from the fact that the query points can be very close to the polygon edges, and the algorithm should still work there. While this formula requires several functions that need to be approximated, the alternative methods all require multiple comparisons, which tend to be even more expensive.

Please provide a minimal working example we can test on our side. This might be a configuration issue; my suspicion is the scaling mod size and multiplicative depth parameters.

On another note, and just out of curiosity, have you looked at this method to approximate \arccos of a ratio?
At first sight, it looks more numerically stable to me.

Thank you for the suggestion for approximating arccos of a ratio! It looks very interesting and I will take a closer look, but at first glance the fact that every iteration requires a square root might cause new problems.

As for your other point, below is an MWE in Python. I ran it on Ubuntu with openfhe v1.3.1.0.24.4. As you can see from the output, I lose a lot of precision after scaling up the values before the division. After multiplying the approximated reciprocal by the numerator, the end result kind of works, but with a scale mod size of 45 I am only left with 16 bits at the end at a batch size of 1024. If the batch size is increased, the final precision drops even more, for example to 14 bits at 4096.

In this example the input ciphertexts are fresh and start with maximum precision, but in most cases the inputs have already been degraded and carry lower precision. In those cases, decryption fails after the division because the precision is too low.

Please let me know your thoughts on this.

Thanks a lot!

from openfhe import *

def printCiph(name: str, ciph: Ciphertext):
    # Decrypt and show the first four slots along with the ciphertext level and remaining precision
    dec = cc.Decrypt(secretKey, ciph)
    dec.SetLength(4)
    print(name, ": ", str(dec.GetFormattedValues(10)), ", Level: ", ciph.GetLevel(), ", Precision: ", dec.GetLogPrecision())

### Parameters ###

sec_level = HEStd_NotSet
scale_mod_size = 45
first_mod_size = 60
batch_size = 4096
ring_dim = batch_size * 2

level_budget = [2, 2]
available_levels = 18
key_dist = SecretKeyDist.SPARSE_TERNARY
mult_depth = available_levels + FHECKKSRNS.GetBootstrapDepth(level_budget, key_dist)

parameters = CCParamsCKKSRNS()
parameters.SetSecurityLevel(sec_level)
parameters.SetRingDim(ring_dim)
parameters.SetSecretKeyDist(key_dist)
parameters.SetMultiplicativeDepth(mult_depth)
parameters.SetScalingModSize(scale_mod_size)
parameters.SetFirstModSize(first_mod_size)
parameters.SetBatchSize(batch_size)
parameters.SetScalingTechnique(ScalingTechnique.FLEXIBLEAUTO)

### Crypto Setup ###

cc = GenCryptoContext(parameters)
cc.Enable(PKESchemeFeature.PKE)
cc.Enable(PKESchemeFeature.KEYSWITCH)
cc.Enable(PKESchemeFeature.LEVELEDSHE)
cc.Enable(PKESchemeFeature.ADVANCEDSHE)
cc.Enable(PKESchemeFeature.FHE)

cc.EvalBootstrapSetup(level_budget, [0, 0], batch_size)
keys = cc.KeyGen()
secretKey = keys.secretKey
publicKey = keys.publicKey
cc.EvalMultKeyGen(keys.secretKey)
cc.EvalBootstrapKeyGen(keys.secretKey, batch_size)

### Encryption ###

plain_a = [0.00002]*4
plain_b = [0.00001, 0.001, 0.1, 1]
# Tile the four test values across the full batch
a = cc.Encrypt(publicKey, cc.MakeCKKSPackedPlaintext(plain_a * (batch_size // 4)))
b = cc.Encrypt(publicKey, cc.MakeCKKSPackedPlaintext(plain_b * (batch_size // 4)))
scale = 100_000


### Division ###

printCiph("Denominator", b)

# Scale the denominator from [1e-5, 1] up into [1, scale] so the 1/x approximation can be applied
values_sc = cc.EvalMult(b, scale)
printCiph("Scaled denom.", values_sc)

# Degree-2031 Chebyshev approximation of 1/x on [1, scale], applied to the scaled denominator
res = cc.EvalChebyshevFunction(lambda x: 1 / x, values_sc, 1, scale, 2031)
printCiph("Reciprocal", res)

# Undo the scaling: 1/b = scale * (1 / (scale * b)); this is the multiplication that costs precision
denom = cc.EvalMult(res, scale)
printCiph("Rescaled reciprocal", denom)
print("\n")

result = cc.EvalMult(a, denom)
printCiph("Division result", result)

print("Plain result:", [a/b for a, b in zip(plain_a, plain_b)])

Try increasing scale_mod_size to 59. Would that provide enough precision for your application?
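Concretely, in the MWE above that would be the following change (keeping first_mod_size at 60 so it stays above the scaling mod size):

scale_mod_size = 59   # was 45; more bits per rescaling step, so more precision headroom
first_mod_size = 60   # unchanged; stays above scale_mod_size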

There are other ways to get higher precision with CKKS, but they are more involved.

By the way, to compare the precision of CKKS against a plaintext computation, I think it is fairer to simulate the CKKS computation in plaintext rather than just comparing the final outputs. That is, in the plaintext computation also scale your values, take the reciprocal (standard division standing in for the Chebyshev approximation), rescale back, and so on.
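Something along these lines, as a rough sketch (the last lines additionally use numpy's Chebyshev interpolant of 1/x as a stand-in for EvalChebyshevFunction; it will not reproduce OpenFHE's exact coefficients, but it shows how much error the approximation itself contributes before any encryption noise):

import numpy as np
from numpy.polynomial import Chebyshev

scale, degree = 100_000, 2031
plain_a = np.array([0.00002] * 4)
plain_b = np.array([0.00001, 0.001, 0.1, 1.0])

# Plaintext simulation of the pipeline: scale up, take the reciprocal, scale back, multiply
scaled = scale * plain_b
exact_pipeline = plain_a * (scale * (1.0 / scaled))

# Same pipeline, but with a Chebyshev interpolant of 1/x on [1, scale]
cheb = Chebyshev.interpolate(lambda x: 1.0 / x, degree, domain=[1, scale])
approx_pipeline = plain_a * (scale * cheb(scaled))

print("direct division:   ", plain_a / plain_b)
print("simulated pipeline:", exact_pipeline)
print("with Chebyshev 1/x:", approx_pipeline)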

Thanks for your input!
I could increase the precision, although I was hoping to stay with a ring dimension of 2^16 for performance reasons.

Just to fully clarify: the part that really decreases the precision here is the multiplication by 100000. After that one multiplication I lose about 17 bits of precision. Intuitively that makes sense to me, because multiplying by such a large factor should scale the error up as well (log2(100000) ≈ 16.6, which matches the roughly 17 bits I lose).
Am I correct about that? And is there any trick or way around it, so that I can do the scaling I need for EvalChebyshevFunction without losing that much precision?

Alternatively, if there is no way to circumvent this, I’ll go with the option to try and increase the starting precision.