Evaluating BinGate on LWECiphertext after CKKS to FHEW scheme switching

Hello,

I’m trying to convert two CKKS ciphertexts into their encrypted bits (FHEW ciphertexts) using scheme switching. Extracting the bits from the converted CKKS ciphertexts works fine, but when I perform the XOR operation between the bits, I always get 00.

Here is my entire code:

 
#include "openfhe.h"
#include "binfhecontext.h"

using namespace lbcrypto;
using namespace std;

int main() {
    // =========================================================================
    // STEP 1: SETUP (CKKS + FHEW Linked)
    // =========================================================================

    // 1.1 CKKS Parameters
    uint32_t multDepth = 3;
    uint32_t firstModSize = 60;
    uint32_t scaleModSize = 50;
    uint32_t ringDim = 4096;
    uint32_t slots = 16;
    SecurityLevel sl = HEStd_NotSet;

    // 1.2 FHEW Parameters
    BINFHE_PARAMSET slBin = STD128;
    uint32_t logQ_ccLWE = 25;

    CCParams<CryptoContextCKKSRNS> parameters;
    parameters.SetMultiplicativeDepth(multDepth);
    parameters.SetFirstModSize(firstModSize);
    parameters.SetScalingModSize(scaleModSize);
    parameters.SetScalingTechnique(FLEXIBLEAUTOEXT);
    parameters.SetSecurityLevel(sl);
    parameters.SetRingDim(ringDim);
    parameters.SetBatchSize(slots);

    CryptoContext<DCRTPoly> cc = GenCryptoContext(parameters);
    cc->Enable(PKE);
    cc->Enable(KEYSWITCH);
    cc->Enable(LEVELEDSHE);
    cc->Enable(SCHEMESWITCH);

    auto keys = cc->KeyGen();

    // 1.3 Setup Scheme Switching
    SchSwchParams params;
    params.SetSecurityLevelCKKS(sl);
    params.SetSecurityLevelFHEW(slBin);
    params.SetCtxtModSizeFHEWLargePrec(logQ_ccLWE);
    params.SetNumSlotsCKKS(slots);

    auto privateKeyFHEW = cc->EvalCKKStoFHEWSetup(params);
    auto ccLWE = cc->GetBinCCForSchemeSwitch();
    cc->EvalCKKStoFHEWKeyGen(keys, privateKeyFHEW);

    // 1.4 Generate Bootstrapping Keys for EvalFunc (Bit Extraction)
    std::cout << "Generating Bootstrapping Keys..." << std::endl;
    ccLWE->BTKeyGen(privateKeyFHEW);
    std::cout << "Setup Complete.\n" << std::endl;

    // =========================================================================
    // STEP 2: PREPARE LUTS (For Bit Extraction)
    // =========================================================================

    // We are using STD128, so q=4. We can represent values 0,1,2,3.
    // This equals 2 bits: 00, 01, 10, 11.
    int p = ccLWE->GetMaxPlaintextSpace().ConvertToInt();
    cout << "p = " << p << endl;
    int numBits = 2;

    std::vector<std::vector<NativeInteger> > lut_bits;
    auto fp0 = [](NativeInteger m, NativeInteger p1) -> NativeInteger {
        return (m.ConvertToInt() >> 0) & 1;
    };
    lut_bits.push_back(ccLWE->GenerateLUTviaFunction(fp0, p));

    auto fp1 = [](NativeInteger m, NativeInteger p1) -> NativeInteger {
        return (m.ConvertToInt() >> 1) & 1;
    };
    lut_bits.push_back(ccLWE->GenerateLUTviaFunction(fp1, p));


    // =========================================================================
    // STEP 3: ENCRYPT DATA IN CKKS
    // =========================================================================

    std::vector<double> input1 = {3.0};
    uint32_t len = input1.size();

    Plaintext ptxt1 = cc->MakeCKKSPackedPlaintext(input1, 1, 0, nullptr);
    auto ct_ckks1 = cc->Encrypt(keys.publicKey, ptxt1);


    std::vector<double> input2 = {0};
    Plaintext ptxt2 = cc->MakeCKKSPackedPlaintext(input2, 1, 0, nullptr);
    auto ct_ckks2 = cc->Encrypt(keys.publicKey, ptxt2);

    // =========================================================================
    // STEP 4: SCHEME SWITCH (CKKS -> FHEW)
    // =========================================================================

    // Precompute scale factors
    //auto pLWE1 = ccLWE->GetMaxPlaintextSpace().ConvertToInt();
    auto modulus_LWE = 1 << logQ_ccLWE;
    auto beta = ccLWE->GetBeta().ConvertToInt();
    auto pLWE2 = modulus_LWE / (2 * beta);
    //double scale1 = 1.0 / pLWE1;
    double scale2 = 1.0 / pLWE2;

    cc->EvalCKKStoFHEWPrecompute(scale2);

    auto ct_fhew_vector1 = cc->EvalCKKStoFHEW(ct_ckks1, len);
    auto ct_fhew_vector2 = cc->EvalCKKStoFHEW(ct_ckks2, len);

    // =========================================================================
    // STEP 5: EXTRACT BITS
    // =========================================================================

    std::cout << "\nExtracting Bits via EvalFunc...\n";

    LWEPlaintext bitResult;

    for (uint32_t i = 0; i < ct_fhew_vector1.size(); ++i) {
        auto decomposed1 = ccLWE->EvalDecomp(ct_fhew_vector1[i]);
        auto decomposed2 = ccLWE->EvalDecomp(ct_fhew_vector2[i]);

        auto ct_small1 = decomposed1[0];
        auto ct_small2 = decomposed2[0];

        vector<int> bits1;
        vector<int> bits2;
        vector<int> res;

        for (int b = numBits - 1; b >= 0; b--) {
            // Apply LUT to extract specific bit from the valid small ciphertext
            auto ct_single_bit1 = ccLWE->EvalFunc(ct_small1, lut_bits[b]);
            auto ct_single_bit2 = ccLWE->EvalFunc(ct_small2, lut_bits[b]);

            // Decrypt to verify
            ccLWE->Decrypt(privateKeyFHEW, ct_single_bit1, &bitResult, p);
            bits1.push_back(bitResult);

            ccLWE->Decrypt(privateKeyFHEW, ct_single_bit2, &bitResult, p);
            bits2.push_back(bitResult);

            auto r = ccLWE->EvalBinGate(XOR, ct_single_bit1, ct_single_bit2);
            ccLWE->Decrypt(privateKeyFHEW, r, &bitResult, p);
            res.push_back(bitResult);

        }
        if (i==0) {
            reverse(bits1.begin(), bits1.end());
            //reverse(bits2.begin(), bits2.end());
        }
        std::cout << "input1 Index " << i << " (Value " << input1[i] << "): ";
        for (auto e: bits1) {
            cout << e << "";
        }
        std::cout << " (Binary)" << std::endl;

        std::cout << "input2 Index " << i << " (Value " << input2[i] << "): ";
        for (auto e: bits2) {
            cout << e << "";
        }
        std::cout << " (Binary)" << std::endl;

        std::cout << "Result: " ;
        for (auto e: res) {
            cout << e << "";
        }
        std::cout << " (Binary)" << std::endl;
        cout << "----------------\n";
    }

    return 0;
}

This is the output:

Generating Bootstrapping Keys...
Setup Complete.

p = 16

Extracting Bits via EvalFunc...
input1 Index 0 (Value 3): 11 (Binary)
input2 Index 0 (Value 0): 00 (Binary)
Result: 00 (Binary)

I checked the examples in the GitHub repo, but the output after FHEW switching seems to be in base 16 (p = 16 above). My goal is just to perform homomorphic binary gates between CKKS ciphertexts.

I checked the discussion of this question, but it was not helpful for me!

Thanks in advance!

There are two points:

  1. To use EvalFunc, you need to set params.SetArbitraryFunctionEvaluation(true), as done in FuncViaSchemeSwitching() in the scheme-switching.cpp example. Note that, as mentioned in the question you referenced, this only works for p ≤ 8.
  2. To use EvalDecomp, you need to use large precision, which is incompatible with p ≤ 8. So if you are in the above case, it’s better to define a LUT and use EvalFunc for your needs.

That being said, scheme switching is no longer the best tool available. You can directly evaluate LUTs over vectors of numbers, essentially a vectorized FHEW, using functional bootstrapping. Please see functional-bootstrapping-ckks.cpp.

Thanks for your response, @andreea.alexandru! I have been exploring the functional bootstrapping example you referenced, and it works as expected. But is it possible to decrypt the final result as a CKKS ciphertext, without needing to convert it back to RLWE?

Basically, what I’m trying to do is use functional bootstrapping without having to switch between CKKS and RLWE back and forth, or at least convert CKKS ciphertexts to RLWE only for evaluating the FBT and then continue operating on CKKS ciphertexts. When I tried to decrypt the CKKS ciphertext after the FBT, I got the error Decode(): The decryption failed because the approximation error is too high. Check the parameters.

This is how my code looks:

void just_ckks(BigInteger QBFVInit, BigInteger PInput, BigInteger POutput, BigInteger Q, BigInteger Bigq,
                             uint64_t scaleTHI, size_t order, uint32_t numSlots, uint32_t ringDim,
                             uint32_t levelsComputation) {

    auto my_func = [](int64_t box) {
        return //some code ... ;
    };
    
    
    std::vector<int64_t> x1 = {10};
    std::vector<int64_t> x2 = {13};
    
    std::vector<std::complex<double> > my_coeff;

    my_coeff = GetHermiteTrigCoefficients(my_func, PInput.ConvertToInt(), order, scaleTHI);
    
    /*
     * Setting parameters and depth...
     */

    auto cc = GenCryptoContext(parameters);
    cc->Enable(PKE);
    cc->Enable(KEYSWITCH);
    cc->Enable(LEVELEDSHE);
    cc->Enable(ADVANCEDSHE);
    cc->Enable(FHE);

    auto keyPair = cc->KeyGen();
    cc->EvalFBTSetup(my_coeff, numSlotsCKKS, PInput, POutput, Bigq, keyPair.publicKey, {0, 0}, lvlb,
                     levelsAvailableAfterBootstrap, levelsComputation, order);

    
    bool flagBR = (lvlb[0] != 1 || lvlb[1] != 1);
    
    auto ep = SchemeletRLWEMP::GetElementParams(keyPair.secretKey, depth - (levelsAvailableBeforeBootstrap > 0));

    auto ctxtBFV1 = SchemeletRLWEMP::EncryptCoeff(x1, QBFVInit, PInput, keyPair.secretKey, ep, flagBR);
    auto ctxtBFV2 = SchemeletRLWEMP::EncryptCoeff(x2, QBFVInit, PInput, keyPair.secretKey, ep, flagBR);

    SchemeletRLWEMP::ModSwitch(ctxtBFV1, Q, QBFVInit);
    SchemeletRLWEMP::ModSwitch(ctxtBFV2, Q, QBFVInit);
    
    auto ctxt1 = SchemeletRLWEMP::ConvertRLWEToCKKS(*cc, ctxtBFV1, keyPair.publicKey, Bigq, numSlotsCKKS,
                                                    depth - (levelsAvailableBeforeBootstrap > 0));
    auto ctxt2 = SchemeletRLWEMP::ConvertRLWEToCKKS(*cc, ctxtBFV2, keyPair.publicKey, Bigq, numSlotsCKKS,
                                                    depth - (levelsAvailableBeforeBootstrap > 0));

    /**
     * some + and * on CKKS ctxt...
     */

    std::vector<Ciphertext<DCRTPoly> > complexExp;
    Ciphertext<DCRTPoly> my_res;


    auto mvb = cc->EvalMVBPrecompute(ctxt_res, my_coeff, PInput.GetMSB() - 1, ep->GetModulus(), order);
    
    my_res =
            cc->EvalMVB(mvb, my_coeff, PInput.GetMSB() - 1, scaleTHI, levelsComputation, order);
    

    Plaintext p;
    cc->Decrypt(keyPair.secretKey, my_res, &p); // <--- Error ...
    cout << "Result: " << p->GetCKKSPackedValue() << endl;  

    
    auto polys = SchemeletRLWEMP::ConvertCKKSToRLWE(my_res, Q);
    
    auto computed =
            SchemeletRLWEMP::DecryptCoeff(polys, Q, POutput, keyPair.secretKey, ep, numSlotsCKKS, numSlots, flagBR);
    std::cerr << "result = ";
    std::copy_n(computed.begin(), numSlots, std::ostream_iterator<int64_t>(std::cerr, " "));
    
}

int main() {
    just_ckks(QBFVINIT, BigInteger(512), BigInteger(512), (BigInteger(1) << 47), (BigInteger(1) << 47),
                            32, 1, 1, 2048, 3);
}

Thanks for your time and consideration.

The functionality you want can be achieved, but the current capability available in OpenFHE was not designed for this, so the solution will be a bit clunky. We are planning to add a CKKS-only version of functional bootstrapping in v1.6.

Please read the paper describing the functional bootstrapping capability (Algorithm 1) to understand the inner workings of EvalFBT. The reason decryption fails in your case is that you are decrypting and decoding a ciphertext that is in coefficient form, not in slots. So if you want to correctly decrypt the CKKS ciphertext resulting from the functional bootstrapping, you should skip the homomorphic decoding step. This is exposed in OpenFHE as EvalFBTNoDecoding (or EvalMVBNoDecoding). You also need to set levelsAvailableAfterBootstrap to at least 1, and to multiply by scaleTHI, which normally happens in the homomorphic decoding step (now skipped):

    ctxtAfterFBT = cc->EvalFBTNoDecoding(ctxt, coeff, PInput.GetMSB() - 1, ep->GetModulus(), order);
    ctxtAfterFBT = cc->EvalMult(ctxtAfterFBT, scaleTHI);

However, because of the internal workings of the homomorphic FFT, if you set the level budget to be larger than 1, the message will come out in bit-reversed order; setting the level budget to 1 resolves this, but it is very slow at large ring dimensions. If your computation does not involve rotations, you can ignore the bit-reversed order and just account for it in the final result. Otherwise, we have a flag to encode the input in RLWE already bit reversed, such that after EvalFBTNoDecoding it will be in the natural order. This is explained in the MultiValueBootstrapping example, which you based your code on.

Thank you so much for your help!