ONNX for passing model to OpenFHE mathematics

Hello everyone,

I am currently working on a framework for encrypted inference using OpenFHE. The idea is to feed an ONNX model file to this software and let OpenFHE handle the model's ML operations. That means I only need a tool to extract the model's layers and underlying parameters and pass that information to my OpenFHE inference library, without pulling in the full ONNX Runtime, which has its own hardware acceleration techniques for carrying out these operations. Does anyone know of such a tool that I can easily include in my CMake project?

Cheers and thanks in advance

PS: I don’t know if this is the proper category for this question. If you do not think so feel free to change it or let me know.

I’d say probably not - OpenFHE is a relatively young library and deep learning inference is still an active area of work. You might need to write something on your own.

Thanks for your response. The way I am doing it right now is to write Python wrappers around my C++ code and then use Python to handle the machine learning setup.


Well, this is an open-source community, so please feel free to share your project (especially the ONNX converter) in our show-and-tell :slight_smile:


I will, once I have something running. But I did not write a converter for ONNX; I am just porting my OpenFHE code to a Python library with Pybind11, so that (among other reasons) I can use the ONNX libraries available in Python :slight_smile:


Just a heads-up: we actually have a work-in-progress Python port using pybind11, so if you’d like to collaborate please let us know!