Hi all,
I’ve been working on a ResNet-20 model using OpenFHE, and I encountered a strange issue when changing the number of slots in my ciphertext after a downsampling operation.
Approach:
- I downsample encrypted data using a convolution layer with striding.
- After downsampling, I want to change the number of slots in the ciphertext to the new size, which is typically half the original size.
- To do this, I:
  - Clear all evaluation keys (Mult, Sum, Rotation) using ClearEvalMultKeys(), ClearEvalSumKeys(), etc.
  - Deserialize and load a new set of evaluation keys appropriate for the next layers.
  - Call ciphertext->SetSlots(newNumSlots); on the downsampled ciphertext to reflect the new logical size. (A rough sketch of this flow is below.)
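In code, the flow looks roughly like this (a rough sketch, not my actual code: the file names are placeholders, and the serialization calls follow the standard OpenFHE serialization pattern):

// Rough sketch of the flow above. Assumes an existing cryptoContext and
// ciphertext, and that the next layers' keys were serialized beforehand;
// file names are placeholders.
cryptoContext->ClearEvalMultKeys();
cryptoContext->ClearEvalSumKeys();
cryptoContext->ClearEvalAutomorphismKeys();  // rotation keys

std::ifstream multKeys("next_layer_mult_keys.bin", std::ios::binary);
cryptoContext->DeserializeEvalMultKey(multKeys, SerType::BINARY);
std::ifstream rotKeys("next_layer_rot_keys.bin", std::ios::binary);
cryptoContext->DeserializeEvalAutomorphismKey(rotKeys, SerType::BINARY);

ciphertext->SetSlots(newNumSlots);  // reinterpret with the new logical size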
Problem:
When I decrypt the ciphertext before calling SetSlots(newNumSlots), the output looks correct. For example:
Layer 2 Downsampled - new size: 8192
Range: [ -2.61888 , 2.32704 ]
Index: 2674
However, after calling SetSlots(newNumSlots), the decrypted values are unexpectedly halved:
Range: [ -1.30945 , 1.16353 ]
Index: 2674
This scaling issue is consistent throughout my entire network and only occurs after using SetSlots().
Question:
Does SetSlots() implicitly modify the scaling or encoding metadata in a way that could halve the decrypted values?
What is the recommended way to adjust the number of slots in a ciphertext after downsampling while preserving correct scaling?
Any insights or explanations would be greatly appreciated!
My crypto context is set with the following parameters:
auto secretKeyDist = SPARSE_TERNARY;
ScalingTechnique rescaleTech = FLEXIBLEAUTO;
int ringDegree = 15;
int numSlots = 14;
int multDepth = 11;
int dcrtBits = 50;
int firstMod = 54;
int digitSize = 3;
vector<uint32_t> levelBudget = {4, 4};
It’s difficult to say anything definite about the issue you are running into without a minimal working example, but here are some things to keep in mind.
The ciphertext (I assume you are working with CKKS) natively encrypts a vector of length N/2, for N the ring dimension; this is the true number of slots. Operations over ciphertexts are always applied over this full length, regardless of the batch size or number of slots you set. In some cases, particularly in bootstrapping, knowing that your message vector is actually smaller than N/2 can improve efficiency a lot (we can run smaller-dimension FFTs). When setting a smaller number of slots, we are working in a smaller subring (see the Background section of https://eprint.iacr.org/2018/1043.pdf). Internally, using sparse packing for n < N/2 slots means your n message slots of interest are cloned N/(2n) times.
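As a toy illustration (insecure parameters chosen only to make the cloning visible, not taken from your setup): encode n = 4 values sparsely in a ring of dimension N = 32, then reinterpret the ciphertext with the full N/2 = 16 slots to see the N/(2n) = 4 copies.

#include <iostream>
#include "openfhe.h"
using namespace lbcrypto;

int main() {
    CCParams<CryptoContextCKKSRNS> params;
    params.SetRingDim(32);                  // toy ring dimension, insecure
    params.SetSecurityLevel(HEStd_NotSet);  // allow the toy ring dimension
    params.SetMultiplicativeDepth(1);
    params.SetScalingModSize(50);
    auto cc = GenCryptoContext(params);
    cc->Enable(PKE);
    auto keys = cc->KeyGen();

    // Encode 4 values using only 4 slots (sparse packing).
    std::vector<double> v = {1.0, 2.0, 3.0, 4.0};
    Plaintext pt = cc->MakeCKKSPackedPlaintext(v, 1, 0, nullptr, 4);
    auto ct = cc->Encrypt(keys.publicKey, pt);

    Plaintext dec;
    cc->Decrypt(keys.secretKey, ct, &dec);
    dec->SetLength(4);
    std::cout << dec << std::endl;  // approx. (1, 2, 3, 4)

    // Reinterpret with the full 16 slots: the 4 values appear 4 times.
    ct->SetSlots(16);
    cc->Decrypt(keys.secretKey, ct, &dec);
    dec->SetLength(16);
    std::cout << dec << std::endl;  // approx. (1, 2, 3, 4, 1, 2, 3, 4, ...)
    return 0;
}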
If you start with a ciphertext with full packing (N/2 slots) and want to move to a sparser packing (n < N/2) because only the first n values are non-zero, you have to make this change manually, i.e., apply rotations and additions. Note that if the values after slot n are not already zero, you first have to apply multiplicative masking to zero them out (see the sketch after this paragraph). SetSlots only sets a parameter, telling the library how to interpret the current ciphertext. This is why you see a scaling difference: your ciphertext is still the same, but you apply decryption with a smaller number of slots. See these discussions in the forum for more explanations: OpenFHE SetSlots method: what is the usage scenario?, How does CKKS Sparse Encoding work exactly?, How to transform a ciphertext from one slot count to another without decryption.
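Putting the masking and cloning together, here is a rough sketch of a helper (my illustrative code, not an OpenFHE API; it assumes power-of-two slot counts, a multiplication key, and rotation keys for the negative indices used):

#include "openfhe.h"
using namespace lbcrypto;

// Illustrative helper (not an OpenFHE API): move a ciphertext from fromSlots
// to toSlots < fromSlots. Assumes power-of-two slot counts, fromSlots equal
// to the encoding batch size, and rotation keys for the indices
// -toSlots, -2*toSlots, ..., -fromSlots/2 (e.g., via EvalRotateKeyGen).
Ciphertext<DCRTPoly> ReduceSlots(CryptoContext<DCRTPoly> cc,
                                 Ciphertext<DCRTPoly> ct,
                                 uint32_t fromSlots, uint32_t toSlots) {
    // 1) Zero out slots toSlots..fromSlots-1 by multiplicative masking
    //    (skip this step, and save a level, if they are already zero).
    std::vector<double> mask(fromSlots, 0.0);
    for (uint32_t i = 0; i < toSlots; ++i) mask[i] = 1.0;
    auto res = cc->EvalMult(ct, cc->MakeCKKSPackedPlaintext(mask));
    // 2) Clone the first toSlots values across all fromSlots slots by
    //    repeated rotate-and-add, doubling the number of copies each time.
    for (uint32_t shift = toSlots; shift < fromSlots; shift *= 2) {
        cc->EvalAddInPlace(res, cc->EvalRotate(res, -static_cast<int32_t>(shift)));
    }
    // 3) Only now is it safe to reinterpret the ciphertext with fewer slots.
    res->SetSlots(toSlots);
    return res;
}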
Also, we work with power-of-two slot counts. Is int numSlots = 14; referring to the number of slots being 2^{14}?
Thank you for the clarification in your previous response.
Yes, I am using CKKS, and when I say numSlots = 14, I mean the number of slots is 2^{14}.
Your explanation matches my understanding of how SetSlots() should work, which makes it even more puzzling that the ciphertext values change after calling it.
I have a few specific questions:
- Am I correct in understanding that I must run EvalBootstrapSetup() and EvalBootstrapKeyGen() with the new number of slots to set up my context before calling SetSlots()? If so, how do these functions interact with the originally initialized bootstrap context values?
- I understand I need to clear all rotation, multiplication, and summation keys before loading the new context that matches the desired slot size for SetSlots(). My question: are the Sum and Mult keys generated differently depending on the slot count used during EvalBootstrapSetup() and EvalBootstrapKeyGen()? Or can I use the same Sum and Mult keys regardless of the slot configuration?
- Is it possible that ciphertext noise is contributing to the value scaling issue I’m observing after calling SetSlots()?
Thanks again for your help.
Hello @andreea,
I’ve put together a smaller code example to reproduce the error.
I’d appreciate any pointers or insights you might have on what could be going wrong with my work.
#include <iostream>
#include <random>

#include "openfhe.h"

using namespace lbcrypto;
using namespace std;

CryptoContext<DCRTPoly> context;

// Print the decoded complex slot values of a CKKS plaintext.
void printPtext(Plaintext packedVec) {
    vector<complex<double>> finalResult = packedVec->GetCKKSPackedValue();
    cout << finalResult << endl;
    cout << endl;
}

int main(int argc, char *argv[]) {
    int numVal = 12;
    int ringDim = 1 << numVal;
    int numSlots = 1 << (numVal - 1);      // full packing: ringDim / 2
    int halfnumSlots = 1 << (numVal - 2);  // target sparse packing
    int multDepth = 8;
    int dcrtBits = 50;
    int firstMod = 54;
    int digitSize = 3;
    vector<uint32_t> levelBudget = {4, 4};
    vector<uint32_t> bsgsDim = {0, 0};

    CCParams<CryptoContextCKKSRNS> parameters;
    auto secretKeyDist = SPARSE_TERNARY;
    parameters.SetRingDim(ringDim);
    parameters.SetBatchSize(numSlots);
    parameters.SetScalingModSize(dcrtBits);
    parameters.SetFirstModSize(firstMod);
    parameters.SetNumLargeDigits(digitSize);
    parameters.SetSecretKeyDist(secretKeyDist);
    parameters.SetSecurityLevel(lbcrypto::HEStd_NotSet);
    parameters.SetScalingTechnique(FLEXIBLEAUTO);

    auto circuit_depth = multDepth + FHECKKSRNS::GetBootstrapDepth(levelBudget, secretKeyDist);
    parameters.SetMultiplicativeDepth(circuit_depth);

    context = GenCryptoContext(parameters);
    context->Enable(PKE);
    context->Enable(KEYSWITCH);
    context->Enable(LEVELEDSHE);
    context->Enable(ADVANCEDSHE);
    context->Enable(FHE);

    // cout << "Generate Keys ......" << endl;
    auto keyPair = context->KeyGen();
    context->EvalMultKeyGen(keyPair.secretKey);
    context->EvalSumKeyGen(keyPair.secretKey);
    context->EvalBootstrapSetup(levelBudget, bsgsDim, numSlots);
    context->EvalBootstrapKeyGen(keyPair.secretKey, numSlots);

    // Generate a random vector of halfnumSlots values in [-2, 2).
    vector<double> dataVec(halfnumSlots);
    random_device rd;
    mt19937 gen(rd());
    uniform_real_distribution<double> dist(-2.0, 2.0);
    for (auto &val : dataVec) {
        val = dist(gen);
    }

    Plaintext plaintext = context->MakeCKKSPackedPlaintext(dataVec, 1, 1);
    plaintext->SetLength(halfnumSlots);
    auto encryptedData = context->Encrypt(keyPair.publicKey, plaintext);
    encryptedData->SetSlots(numSlots);
    cout << endl << " - Encrypted Data of " << dataVec.size() << " Elements" << endl;

    /***********************************************************************************************/
    Plaintext plaintextDec;
    context->Decrypt(keyPair.secretKey, encryptedData, &plaintextDec);
    plaintextDec->SetLength(10);
    printPtext(plaintextDec);

    context->EvalBootstrapSetup(levelBudget, bsgsDim, halfnumSlots);
    context->EvalBootstrapKeyGen(keyPair.secretKey, halfnumSlots);

    cout << " - Slots Changed Output" << endl;
    encryptedData->SetSlots(halfnumSlots);
    context->Decrypt(keyPair.secretKey, encryptedData, &plaintextDec);
    plaintextDec->SetLength(10);
    printPtext(plaintextDec);
    /***********************************************************************************************/

    return 0;
}
The decryption is scaled because you decrypt a ciphertext with a different number of slots than it was encoded with. It is related to the encoding and decoding being FFT computations. At a very high level, your message is scaled by 1/N during encoding, but if you halve the number of slots, it only gets scaled back by N/2, so a slot vector whose second half is zero decodes to half the original values.
To correctly decode a sparse message (in this case, for half the number of slots used in encoding), you need to have the correct subring representation, which means you have to manually clone it, as I suggested:
// Clone the first halfnumSlots values into the second half before
// reinterpreting the ciphertext: rotate by halfnumSlots and add.
// (This uses the rotation key for index halfnumSlots; if it is not already
// available, generate it with EvalRotateKeyGen.)
encryptedData->SetSlots(numSlots);
auto temp = context->EvalAtIndex(encryptedData, halfnumSlots);
context->EvalAddInPlace(encryptedData, temp);

cout << " - After cloning" << endl;
encryptedData->SetSlots(halfnumSlots);
context->Decrypt(keyPair.secretKey, encryptedData, &plaintextDec);
plaintextDec->SetLength(10);
printPtext(plaintextDec);
Regarding your previous questions (make sure you check this out):
- EvalBootstrapSetup needs to be run for each number of slots. OpenFHE is designed to hold precomputations for multiple slot counts, but keep in mind that the storage can get large.
- The keys themselves do not depend on the number of slots (they depend on the ring dimension); however, the rotation indices required differ for different numbers of slots (and hence, you might need new rotation keys). You should run EvalBootstrapKeyGen for each slot count you need, and only one set of keys will be held (no copies).
- You do not need to run EvalMultKeyGen more than once. EvalSumKeyGen generates the corresponding rotation keys using the batchSize from the encoding parameters (what you set in the crypto context generation; by default ringDim/2). When you call EvalSum, you can call it with batch sizes that are powers of two smaller than the initial batchSize. A short sketch follows this list.
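For example, reusing the names from your example above, holding precomputations and keys for both slot counts looks like this (sketch):

// Sketch: precomputations and keys for two slot counts held at the same time.
context->EvalBootstrapSetup(levelBudget, bsgsDim, numSlots);      // full packing
context->EvalBootstrapSetup(levelBudget, bsgsDim, halfnumSlots);  // sparse packing
context->EvalBootstrapKeyGen(keyPair.secretKey, numSlots);
context->EvalBootstrapKeyGen(keyPair.secretKey, halfnumSlots);
// The mult key does not depend on the slot count: generate it once.
context->EvalMultKeyGen(keyPair.secretKey);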
Hello Andreea, thank you so much for your feedback.
With this approach, I can get the right values, but it looks like there’s a slight change in the resulting values.
I am still debugging, but your feedback was very helpful.
If you do many additions, for example when going from a very large slot count to a much smaller one, it is expected that errors accumulate in the least significant bits.