Scaling Chebyshev inputs near zero in [-1, 1]

I am working with a Chebyshev approximation of the tanh function under homomorphic encryption, with the input restricted to the range [-1, 1]. However, the original tanh function accepts inputs over the whole real line, and in practice we scale the input so that the output saturates to 1 or -1. With the Chebyshev approximation bounded to [-1, 1], it is hard to reach outputs of 1 or -1: scaling the input up pushes it outside the approximation domain [-1, 1], where the Chebyshev approximation becomes unstable.
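To illustrate the problem in plaintext (a minimal sketch using numpy's `Chebyshev.interpolate`, not the encrypted evaluation itself): a Chebyshev fit of tanh on [-1, 1] is very accurate inside that interval, but tanh never saturates there, and evaluating the polynomial outside its fitted domain diverges quickly.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Degree-15 Chebyshev interpolant of tanh, fitted on [-1, 1].
approx = Chebyshev.interpolate(np.tanh, 15, domain=[-1, 1])

# Inside the fitted domain the approximation is accurate...
xs_in = np.linspace(-1, 1, 201)
err_in = np.max(np.abs(approx(xs_in) - np.tanh(xs_in)))

# ...but tanh does not saturate here: the largest attainable
# output magnitude is only tanh(1), about 0.76.
peak = np.tanh(1.0)

# Scaling the input up leaves the approximation domain, and the
# degree-15 polynomial diverges from tanh almost immediately.
xs_out = np.linspace(2, 4, 50)
err_out = np.max(np.abs(approx(xs_out) - np.tanh(xs_out)))
```

This is the same instability you observe under encryption: the polynomial is only a valid stand-in for tanh on the interval it was fitted over.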

You should set the approximation interval based on experimental observations. For instance, if you are evaluating a neural network, the inputs to tanh are likely to fall within some interval, and whether it is small or large depends on the layers that come before.
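As a plaintext sketch of this advice: fit the Chebyshev approximation over the interval you actually observe, rather than [-1, 1]. The range [-8, 8] and degree 59 below are hypothetical choices for illustration; measure the real pre-activation range on your own model. (Libraries that expose Chebyshev evaluation over encrypted data typically take the interval endpoints as parameters for the same reason.)

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Hypothetical: profiling showed tanh inputs land roughly in [-8, 8].
a, b = -8.0, 8.0

# A wider interval needs a higher degree for the same accuracy.
approx = Chebyshev.interpolate(np.tanh, 59, domain=[a, b])

xs = np.linspace(a, b, 1001)
err = np.max(np.abs(approx(xs) - np.tanh(xs)))

# Near the ends of the interval tanh saturates, so the
# approximation can now actually reach outputs close to +/-1.
end_value = approx(b)
```

The trade-off is the usual one in the encrypted setting: a wider interval and higher degree cost more multiplicative depth, so the interval should be as tight as your observed inputs allow.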


Thank you so much for the response.
Noted; you can close it for now.