How to optimally configure the OpenFHE backend in Google Transpiler

Sections 5.2 and 5.3 of the referenced document describe the optimal configuration of OpenFHE to achieve the smallest bootstrapping runtime for the STD128 setting (security against classical computer attacks). This includes using a recent clang compiler and setting the following CMake flags: NATIVE_SIZE=32 and WITH_NATIVEOPT=ON. How can this configuration be set in the Google Transpiler?
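For reference, this is roughly what the recommended configuration looks like when building OpenFHE standalone with CMake (the repository URL and compiler version here are assumptions; adjust them to your environment):

    # Sketch of a standalone OpenFHE build with the STD128-optimized settings
    git clone https://github.com/openfheorg/openfhe-development.git
    cd openfhe-development && mkdir build && cd build
    # 32-bit native words plus machine-specific optimizations
    cmake -DCMAKE_C_COMPILER=clang-10 -DCMAKE_CXX_COMPILER=clang++-10 \
          -DWITH_NATIVEOPT=ON -DNATIVE_SIZE=32 ..
    make -j

The question below is how to express these same CMake settings inside the transpiler's Bazel build.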

The transpiler uses Bazel; the BUILD file in the transpiler repo controls how OpenFHE is built.

In that file there are three sections, named core, pke, and binfhe.

If you change the cache_entries and defines in each to the following, you can select the configuration you want:

    name = "core",
    cache_entries = {
        "CMAKE_BUILD_TYPE": "Release",
        "CMAKE_C_COMPILER": "clang-10",
        "CMAKE_CXX_COMPILER": "clang++-10",
        "WITH_NATIVEOPT": "ON",
        "NATIVE_SIZE": "32",
defines = ["MATHBACKEND=2"],

Remember that you have to change the settings in all three sections of the BUILD file.
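Pieced together, one of the three sections (core shown here; pke and binfhe are analogous) would look roughly as follows. This is a sketch assuming the transpiler uses the rules_foreign_cc cmake rule; the load path, rule name (cmake vs. the older cmake_external), and lib_source value are placeholders, so keep whatever the repo already has for them:

    load("@rules_foreign_cc//foreign_cc:defs.bzl", "cmake")

    cmake(
        name = "core",
        cache_entries = {
            "CMAKE_BUILD_TYPE": "Release",
            "CMAKE_C_COMPILER": "clang-10",
            "CMAKE_CXX_COMPILER": "clang++-10",
            "WITH_NATIVEOPT": "ON",
            "NATIVE_SIZE": "32",
        },
        defines = ["MATHBACKEND=2"],
        lib_source = "...",  # placeholder: keep the repo's existing value
    )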

Note: the defines entry is required to get the system to compile, even though MATHBACKEND is not used by the binfhe code at all. This is a quirk of the Bazel build: we usually pass this setting as a CMake directive, NOT as a C++ compiler define.
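To illustrate the quirk, the same setting expressed both ways (the CMake form is how it would normally be passed in a plain OpenFHE build; under Bazel it has to go through the compiler instead):

    # Plain CMake build of OpenFHE: MATHBACKEND as a CMake directive
    cmake -DMATHBACKEND=2 ..

    # Bazel cmake() rule in the transpiler's BUILD file: as a compiler define
    defines = ["MATHBACKEND=2"],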

Note this only works for TOY and STD128.
I measured a 2% improvement for AP and a 17% improvement for GINX using the settings above. I did not have code for STD128, so I could not benchmark it, but I expect similar results (based on the NATIVE_SIZE=64 tests shown below).

As all the work I do uses the post-quantum settings, I was able to measure the improvement with the settings above and NATIVE_SIZE=64: an average 7% improvement for AP and 11% for GINX across TOY, STD128Q_OPT, STD192Q_OPT, and STD256Q_OPT.

All these improvements are relative to the out-of-the-box settings provided in the transpiler repo.


Thank you @dcousins. I would like to point out that the performance speed-up for STD128 should be more significant with NATIVE_SIZE=32 than with NATIVE_SIZE=64 (30% or even more, depending on the environment). At least this is what I observed when running experiments in OpenFHE itself.

An update on this: OpenFHE has optimized the binfhe code a bit, so I will redo these numbers in a week or so, once the new release has dropped and Google has updated their repo with it.

Also, the improvement for NATIVE_SIZE=32 with WITH_NATIVEOPT=ON and clang++ is 10% for TOY AP and 15% for TOY GINX over the Google defaults.