
Performant on-device inferencing with ONNX Runtime


As machine learning continues to permeate across industries, deployment targets are becoming more diverse, with companies choosing to run models locally on-device rather than in cloud-based services for security, performance, and cost reasons. On-device model serving is a difficult task, especially given the limited bandwidth of early-stage startups. In this guest post, the team at Pieces shares the problems and solutions they evaluated for their on-device model serving stack and how ONNX Runtime serves as the backbone of their success.

Local-first machine learning

Pieces is a code snippet management tool that allows developers to save, search, and reuse their snippets without interrupting their workflow. The magic of Pieces is that it automatically enriches these snippets so that they're more useful to the developer after being stored in Pieces. A large part of this enrichment is driven by our machine learning models that provide programming language detection, concept tagging, semantic description, snippet clustering, optical character recognition, and much more. To enable full coverage of the developer workflow, we must run these models from the desktop, terminal, integrated development environment, browser, and team communication channels.

Like many businesses, our first instinct was to serve these models as cloud endpoints; however, we realized this wouldn't suit our needs for a few reasons. First, in order to maintain a seamless developer workflow, our models must have low latency; the round trip to a server is lost time we can't afford. Second, our users frequently work with proprietary code, so privacy is a primary concern, and sending that code over the wire would expose it to potential attacks. Finally, hosting models on performant cloud machines is expensive and, in our view, an unnecessary cost: we firmly believe that modern personal hardware can rival, or even beat, the performance of models served from virtual machines. We therefore needed an on-device model serving platform that would give us these benefits while still offering our machine learning engineers the flexibility that cloud serving provides. After some trial and error, ONNX Runtime emerged as the clear winner.

Our ideal machine learning runtime

When we set out to find the backbone of our machine learning serving system, we were looking for the following qualities:

  • Easy implementation: It should fit seamlessly into our stack and require minimal custom code to implement and maintain. Our application is built in Flutter, so the runtime would ideally work natively in the Dart language so that our non-machine learning engineers could confidently interact with the API.
  • Balanced: As I mentioned above, performance is key to our success, so we need a runtime that can spin up and perform inference lightning fast. On the other hand, we also need useful tools to optimize model performance, ease model format conversion, and generally facilitate the machine learning engineering process.
  • Model coverage: It should support the vast majority of machine learning model operators and architectures, especially cutting-edge models such as those in the transformer family.

TensorFlow Lite

Our initial research revealed three potential options: TensorFlow Lite, TorchServe, and ONNX Runtime. TensorFlow Lite was our top pick because of how easy it would be to implement. We found an open source Dart package which provided Dart bindings to the TensorFlow Lite C API out-of-the-box. This allowed us to simply import the package and immediately have access to machine learning models in our application without worrying about the lower-level details in C and C++.

The tiny runtime offered great performance and worked very well for the initial models we tested in production. However, we quickly ran into a huge blocker: converting other model formats to TensorFlow Lite is a pain. Our first realization of this limitation came when we tried and failed to convert a simple PyTorch LSTM to TensorFlow Lite. This spurred further research into how else we might be limited. We found that many of the models we planned to work on in the future would have to be trained in TensorFlow or Keras because of conversion issues. This was problematic because we've found that there's not a one-size-fits-all machine learning framework. Some are better suited for certain tasks, and our machine learning engineers differ in preference and skill level for each of these frameworks. Unfortunately, we tend to favor PyTorch over TensorFlow.

This issue was then compounded by the fact that TensorFlow Lite only supports a subset of the machine learning operators available in TensorFlow and Keras; importantly, it lags in more cutting-edge operators that are required in new, high-performance architectures. This was the final straw for us with TensorFlow Lite. We were looking to implement a fairly standard transformer-based model that we'd trained in TensorFlow and found that the conversion was impossible. To take advantage of the leaps and bounds made in large language models, we needed a more flexible runtime.

TorchServe

Having learned our lesson on locking ourselves into a specific training framework, we opted to skip testing out TorchServe so that we would not run into the same conversion issues.

ONNX Runtime saves the day

Like TensorFlow Lite, ONNX Runtime gave us a lightweight runtime that focused on performance, but where it really stood out was the model coverage. Being built around the ONNX format, which was created to solve interoperability between machine learning tools, it allowed our machine learning engineers to choose the framework that works best for them and the task at hand and have confidence that they would be able to convert their model to ONNX in the end. This flexibility brought more fluidity to our research and development process and reduced the time spent preparing new models for release.

Another large benefit of ONNX Runtime for us is a standardized model optimization pipeline, truly becoming the “balanced” tool we were looking for. By serving models in a single format, we're able to iterate through a fixed set of known optimizations until we find the desired speed, size, and accuracy tradeoff for each model. Specifically, for each of our ONNX models, the last step before production is to apply different levels of ONNX Runtime graph optimizations and linear quantization. The ease of this process is a quick win for us every time.

Speaking of feature-richness, a final reason that we chose ONNX Runtime was that the baseline performance was good but there were many options we could implement down the road to improve performance. Due to the way we currently build our app, we have been limited to the vanilla CPU builds of ONNX Runtime. However, an upcoming modification to our infrastructure will allow us to utilize execution providers to serve optimized versions of ONNX Runtime based on a user's CPU and GPU architecture. We also plan to implement dynamic thread management as well as IOBinding for GPU-enabled devices.
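
To give a rough idea of what that future work looks like, the sketch below uses the ONNX Runtime Python API to select execution providers based on what is available on the machine and to tune thread counts. The model path, thread counts, and provider preference are illustrative assumptions, not our production configuration.

```python
import onnxruntime as ort

# Hypothetical session tuning; real values depend on the user's hardware.
sess_options = ort.SessionOptions()
sess_options.intra_op_num_threads = 4  # threads used within a single operator
sess_options.inter_op_num_threads = 1  # threads used across operators

# Prefer a GPU provider when it is installed, otherwise fall back to plain CPU.
preferred = ("CUDAExecutionProvider", "CPUExecutionProvider")
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", sess_options, providers=providers)
print(session.get_providers())  # shows which providers were actually loaded
```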

Production workflow

Now that we've covered our reasoning for choosing ONNX Runtime, we'll do a brief technical walkthrough of how we utilize ONNX Runtime to facilitate model deployment.

Model conversion

After we've finished training a new model, our first step towards deployment is getting that model into an ONNX format. The specific conversion approach depends on the framework used to train the model. We have successfully used the conversion tools supplied by HuggingFace, PyTorch, and TensorFlow.
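
For a PyTorch-trained model, the export step typically looks something like the following sketch. The toy network, tensor shapes, and opset version are placeholders rather than one of our production models.

```python
import torch

# Placeholder model standing in for a trained network.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 8),
)
model.eval()

# Example input used to trace the graph during export.
dummy_input = torch.randn(1, 64)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=17,  # choose an opset supported by your ONNX Runtime version
)
```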

Some model formats are not supported by these conversion tools, but luckily ONNX Runtime has its own internal conversion utilities. We recently used these tools to implement a T5 transformer model for code description generation. The HuggingFace model uses a BeamSearch node for text generation that we were only able to convert to ONNX using ONNX Runtime's convert_generation.py tool, which is included in its transformers utilities.

ONNX model optimization

Our first optimization step is running the ONNX model through all ONNX Runtime optimizations, using GraphOptimizationLevel.ORT_ENABLE_ALL, to reduce model size and startup time. We perform all these optimizations offline so that our ONNX Runtime binary doesn't have to perform them on startup. We are able to consistently reduce model size and latency very easily with this utility.
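
In practice, this offline pass amounts to creating a session with ORT_ENABLE_ALL and an optimized_model_filepath so that ONNX Runtime writes the optimized graph to disk. The file names below are hypothetical; the API calls are standard ONNX Runtime Python.

```python
import onnxruntime as ort

sess_options = ort.SessionOptions()
sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL

# Persist the optimized graph so the runtime doesn't redo this work on startup.
sess_options.optimized_model_filepath = "model_opt.onnx"  # hypothetical output path

# Creating the session triggers the optimizations and saves the result.
ort.InferenceSession("model.onnx", sess_options)
```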

Our second optimization step is quantization. Again, ONNX Runtime provides an excellent utility for this. We've used both quantize_dynamic() and quantize_static() in production, depending on our desired balance of speed and accuracy for a specific model.
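
A minimal dynamic-quantization pass looks roughly like the sketch below; the file names are placeholders. quantize_static() follows a similar pattern but additionally requires a calibration data reader fed with representative inputs.

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Dynamic quantization: weights are stored as int8,
# activations are quantized on the fly at inference time.
quantize_dynamic(
    model_input="model_opt.onnx",   # hypothetical optimized model from the previous step
    model_output="model_int8.onnx",
    weight_type=QuantType.QInt8,
)
```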

Inference

Once we have an optimized ONNX model, it's ready to be put into production. We've created a thin wrapper around the ONNX Runtime C++ API which allows us to spin up an instance of an inference session given an arbitrary ONNX model. We based this wrapper on the onnxruntime-inference-examples repository. After developing this simple wrapper binary, we were able to quickly get native Dart support using the Dart FFI (Foreign Function Interface) to create Dart bindings for our C++ API. This reduces the friction between teams at Pieces by allowing our Dart software engineers to easily inject our machine learning efforts into all of our services.
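
Our wrapper itself is C++, but the session lifecycle it manages mirrors the ONNX Runtime Python API shown in this sketch: create a session once, then feed it named tensors per request. The model path, input shape, and provider list here are illustrative assumptions, useful mainly as a quick sanity check before wiring a model into the wrapper.

```python
import numpy as np
import onnxruntime as ort

# Hypothetical model file; the real input names and shapes come from the model itself.
session = ort.InferenceSession("model_int8.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 128).astype(np.float32)

outputs = session.run(None, {input_name: dummy_input})  # None requests all outputs
print(outputs[0].shape)
```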

Conclusion

On-device machine learning requires a tool that is performant yet lets you take full advantage of current state-of-the-art machine learning models. ONNX Runtime gracefully meets both needs, not to mention the incredibly helpful ONNX Runtime engineers on GitHub who are always willing to assist and are constantly pushing ONNX Runtime forward to keep up with the latest trends in machine learning. It's for these reasons that we at Pieces confidently rest our entire machine learning architecture on its shoulders.

Learn more about ONNX Runtime


