CryptoContext serialization vs regeneration

Hello,

I’ve looked at the serialization examples (focused on CKKS), and I see that the suggested approach for handling the context in a client-server application is to generate it on one of the parties, serialize it, and send it to the other party, which deserializes it into a fresh context.
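
For reference, the flow I mean is roughly the following (a minimal sketch based on the examples; the file name and the BINARY serialization type are just placeholders I picked):

#include "openfhe.h"
// serialization support headers used in the OpenFHE examples
#include "cryptocontext-ser.h"
#include "scheme/ckksrns/ckksrns-ser.h"

// Party A: generate the context once, then serialize it to a file (or socket).
void SaveContext(const lbcrypto::CryptoContext<lbcrypto::DCRTPoly>& cc) {
    lbcrypto::Serial::SerializeToFile("cryptocontext.bin", cc, lbcrypto::SerType::BINARY);
}

// Party B: deserialize into a fresh context object.
lbcrypto::CryptoContext<lbcrypto::DCRTPoly> LoadContext() {
    lbcrypto::CryptoContext<lbcrypto::DCRTPoly> cc;
    lbcrypto::Serial::DeserializeFromFile("cryptocontext.bin", cc, lbcrypto::SerType::BINARY);
    return cc;
}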

Given that the parameters of the FHE scheme are public, why do we need to send the context in the first place? Couldn’t we regenerate it when needed? Note that I’m only considering avoiding the serialization of the CryptoContext; I know the keys do need to be serialized and sent for everything to work properly.

I’m also wondering whether this is even possible with OpenFHE, since a small example I ran gives me the following error:

terminate called after throwing an instance of ‘lbcrypto::OpenFHEException’
what(): /usr/local/include/openfhe/pke/cryptocontext.h:l.317:TypeCheck(): Ciphertext was not created in this CryptoContext

For this small test, I’ve written the following function, which I use on both the client and the server side to generate the same CryptoContext object.

#include "openfhe.h"

lbcrypto::CryptoContext<lbcrypto::DCRTPoly> InitializeCryptoContext(const ConfigLoader& configLoader) {
    // Drop any previously cached contexts so each call builds a fresh one.
    lbcrypto::CryptoContextFactory<lbcrypto::DCRTPoly>::ReleaseAllContexts();

    // CKKS parameters read from the application config.
    lbcrypto::CCParams<lbcrypto::CryptoContextCKKSRNS> params;
    params.SetMultiplicativeDepth(configLoader.GetMultDepth());
    params.SetScalingModSize(configLoader.GetScaleMod());
    params.SetFirstModSize(configLoader.GetFirstMod());
    params.SetRingDim(configLoader.GetRingDimension());
    params.SetSecurityLevel(lbcrypto::SecurityLevel::HEStd_128_classic);
    params.SetScalingTechnique(lbcrypto::FIXEDAUTO);

    auto cc = GenCryptoContext(params);

    // Enable the features the application uses.
    cc->Enable(lbcrypto::PKE);
    cc->Enable(lbcrypto::KEYSWITCH);
    cc->Enable(lbcrypto::LEVELEDSHE);
    cc->Enable(lbcrypto::ADVANCEDSHE);

    return cc;
}

Note: if I don’t call ReleaseAllContexts(), everything works fine, since we are testing on a single machine; presumably the factory then hands back the same cached context object to both sides.

Each cryptocontext has two (non-key-dependent) parts: the metadata and the precomputed parameters. The metadata (all the lattice, polynomial, and other crypto-related parameters) gets serialized, as it is derived from specific input parameters. The precomputed parameters get regenerated on the fly when deserializing. The cryptocontext metadata could hypothetically be regenerated instead, but in practice this is error-prone, very risky, and can even be slow. Working with serialized cryptocontext metadata is certainly better from both robustness and security perspectives.

The message you got comes from the type check between the cryptocontext and the ciphertext: the cryptocontext pointer stored in the ciphertext does not match the pointer of the cryptocontext you are working with.
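
Conceptually, the check looks like this (a sketch, not the exact library source):

// Inside a CryptoContextImpl member function: the context pointer stored
// in the ciphertext must be the very context object being called into.
if (ciphertext->GetCryptoContext().get() != this) {
    // throws lbcrypto::OpenFHEException:
    // "Ciphertext was not created in this CryptoContext"
}

Two contexts generated independently from the same parameters (e.g., after ReleaseAllContexts()) are distinct objects, so this pointer comparison fails even though the parameters match.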

In general, it is always good to serialize and deserialize cryptocontext metadata to guarantee expected/secure behavior. Recreating all core crypto parameters on the fly would be error-prone and potentially insecure.

If you want to reduce deserialization time (and avoid regenerating the cryptocontext precomputations multiple times), you might want to read the topic Deserializing Ciphertexts is slow.

Thanks for the clear explanation, Yuriy!

As a follow-up question: I see that we can get the cryptocontext from a ciphertext using the GetCryptoContext method. If the client-server application already sends a ciphertext, is there any advantage to sending the serialized cryptocontext over simply extracting it from the received ciphertext with GetCryptoContext?

It can be done either way. We often deserialize the cryptocontext explicitly to make sure it is done only once, and then turn off the cryptocontext precomputations (see the topic referenced above). If you do not do this, the cryptocontext gets deserialized automatically with every ciphertext/key object, causing the cryptocontext precomputations to run every time; sometimes this can significantly affect the runtime of the user application.
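
For example, on the receiving side you would typically do something like this (a sketch; file names are placeholders):

#include "openfhe.h"
#include "ciphertext-ser.h"
#include "cryptocontext-ser.h"
#include "scheme/ckksrns/ckksrns-ser.h"

using namespace lbcrypto;

void ReceiverSide() {
    // Deserialize the cryptocontext once, up front; its precomputations
    // run a single time here.
    CryptoContext<DCRTPoly> cc;
    Serial::DeserializeFromFile("cryptocontext.bin", cc, SerType::BINARY);

    // Later ciphertext deserializations then find the already-registered
    // context and reuse it instead of rebuilding its precomputations.
    Ciphertext<DCRTPoly> ct;
    Serial::DeserializeFromFile("ciphertext.bin", ct, SerType::BINARY);

    // The context can also be pulled from a received ciphertext:
    auto ccFromCt = ct->GetCryptoContext();
}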
