I saw a couple of threads talking about evaluating circuits of unbounded depth in CKKS thanks to bootstrapping. But that is not really true, is it? Eventually the approximation error will make the ciphertext too noisy. Or is there also a way to combat that?

I am not aware of any practical floating-point arithmetic that gives you infinite-depth arithmetic circuits even on plaintext data. I do not think it can practically be done in CKKS either, but you can reach very large depths thanks to several recent advances, such as iterative CKKS bootstrapping and high-precision implementations of CKKS (128-bit, for example), both available in OpenFHE.

As a practical example of what @Caesar is referring to, look at the logistic regression training implementation at GitHub - openfheorg/openfhe-logreg-training-examples: Examples of Logistic Regression Training using Nesterov Accelerated Gradient Descent. This example executes a circuit of depth 2,800, i.e., 200 iterations of logreg training, with each iteration consuming 14 levels, and the precision is still very good at the end. Note that many machine-learning applications are robust to approximation errors (because they compute statistical averages and the like), so CKKS precision is not an issue in practice, just as low floating-point precision is not an issue for GPU computations in the clear.
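To get an intuition for why training tolerates approximation errors, here is a minimal plaintext sketch (pure Python, not using OpenFHE or encryption at all): logistic regression on a toy separable dataset, with a small Gaussian perturbation injected into the weights after every update to emulate per-level CKKS approximation noise. The dataset, learning rate, and noise magnitude are all illustrative assumptions.

```python
import math
import random

random.seed(0)

# Toy linearly separable data: label is 1 when x0 + x1 > 0.
X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
y = [1.0 if a + b > 0 else 0.0 for a, b in X]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(iters, noise=0.0):
    """Plain gradient-descent logreg; `noise` emulates CKKS approximation error."""
    w = [0.0, 0.0]
    for _ in range(iters):
        g = [0.0, 0.0]
        for (a, b), t in zip(X, y):
            p = sigmoid(w[0] * a + w[1] * b)
            g[0] += (p - t) * a
            g[1] += (p - t) * b
        # Gradient step, then inject noise standing in for approximation error.
        w = [wi - 0.1 * gi / len(X) + random.gauss(0.0, noise)
             for wi, gi in zip(w, g)]
    return w

def accuracy(w):
    correct = sum(1 for (a, b), t in zip(X, y)
                  if (sigmoid(w[0] * a + w[1] * b) > 0.5) == (t == 1.0))
    return correct / len(X)

print("clean:", accuracy(train(200)))
print("noisy:", accuracy(train(200, noise=1e-3)))
```

The noisy run classifies essentially as well as the clean one: the perturbations average out across the gradient sums, which is the statistical-averaging robustness mentioned above.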

Any numerically stable, iterative circuit that converges to a fixed point fast enough will work well with CKKS, because such circuits have a natural self-correcting error mechanism. Logistic regression training is one example; Newton's iteration for computing inverses or square roots is another. You can run them indefinitely and the noise will not grow above a certain threshold, which is tied to the rate of convergence: the encrypted value will simply oscillate around the point of convergence.
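The self-correction is easy to see in the clear. Below is a hedged sketch (plain Python, no encryption) of Newton's iteration x ← x(2 − ax), which converges quadratically to 1/a, with a small Gaussian perturbation added each step to stand in for CKKS approximation noise; the constants (a = 7, noise scale 1e-6) are arbitrary illustrative choices.

```python
import random

random.seed(1)

def noisy_inverse(a, iters, noise):
    """Newton's iteration x <- x * (2 - a * x) converges to 1/a.

    `noise` is additive Gaussian noise injected each step, emulating
    CKKS approximation error.
    """
    x = 0.1  # initial guess inside the basin of convergence (0 < x < 2/a)
    for _ in range(iters):
        x = x * (2 - a * x)          # Newton update toward 1/a
        x += random.gauss(0.0, noise)  # per-step approximation "noise"
    return x

a = 7.0
approx = noisy_inverse(a, iters=1000, noise=1e-6)
print("error after 1000 noisy iterations:", abs(approx - 1.0 / a))
```

Even after 1,000 iterations the error stays on the order of the injected per-step noise rather than accumulating: each quadratic Newton step squashes the perturbation from the previous step, so the iterate just hovers around 1/a, which is exactly the oscillation around the point of convergence described above.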