I am currently working on a framework for encrypted inference using OpenFHE. I want to be able to feed an ONNX model file to this software and let OpenFHE handle the ML operations of the model. That means I just need a tool from which I can extract the model's layers and the underlying parameters, and then feed that information to my OpenFHE inference library, without pulling in the full ONNX Runtime, which has its own hardware acceleration techniques for carrying out these operations. Does anyone know of such a tool that I can easily include in my CMake project?
Cheers and thanks in advance
PS: I don’t know if this is the proper category for this question. If you do not think so, feel free to change it or let me know.
I’d say probably not - OpenFHE is a relatively young library, and deep learning inference on it is still active work. You might need to write something of your own.
Thanks for your response. The way I am doing it right now is to write Python wrappers around my C++ code and then use Python to handle the machine learning setup.
I will, once I have something running. But I did not write a converter for ONNX; I am just porting my OpenFHE code to a Python library with pybind11, so that (among other reasons) I can use the ONNX libraries available in Python.