
Simplifying Cloud to Edge AI Deployments with the Intel® Distribution of OpenVINO™ Toolkit, Microsoft Azure, and ONNX Runtime

MaryT_Intel

"Our life is frittered away by detail. Simplify, simplify, simplify." (Henry David Thoreau)

Significant technological innovations usually follow a well-established pattern. A small group of brilliant minds stumbles upon an incredible innovation, which is then quickly adopted by a group of similarly minded early enthusiasts who love to tinker and provide valuable feedback. Mass adoption by the millions, however, occurs only after the technology is made simple to use, with an interface that makes it easy and intuitive for the general population to embrace. Ease of use is critical for another important user group: the builders and creators. These designers, developers, and engineers are tasked with solving real-life customer and business problems by repurposing a new technology for specific use cases. Cellphones, computers, automobiles, household appliances, airplanes: all the essentials of modern everyday life we take for granted today have followed similar trajectories through two hundred thousand years of human innovation.

We are on the cusp of a similar inflection point with Artificial Intelligence (AI). As we get better at building AI algorithms, and data models become highly accurate and performant, developer workflows become the bottleneck constraining productivity. Unnecessary, tedious work distracts developers from real value creation. Hours are wasted tinkering with models just to make them run efficiently on specific hardware, trying different Python versions just to get the correct libraries to work, or figuring out which Jupyter notebook to use among the hundreds available in open source. Critical chains of thought get disrupted. Cognitive overload becomes a real challenge. In this blog post, we propose a new method to reduce the logistical burden on developers so they have more bandwidth to innovate.

Public clouds are naturally attractive to AI developers, offering ample infrastructure resources, integrated development environments, and excellent tools and support. However, consider the long list of questions cloud-native AI developers must address as they contemplate deploying an AI model to a factory floor or retail shop to solve a customer’s use case:

  • Which model should I use? Which network? Which algorithm? Which framework?
  • What data should I use? How can I annotate it? How can I manipulate the data to train my model?
  • Which cloud should I use?
  • Which Python version will work best?
  • Which Jupyter notebook should I use?
  • Will I need a server to run inference workloads? Which server should I use?
  • Which hardware should I use for edge deployments? How will my performance be?
  • Will I be able to meet my service level agreement at the edge or in the cloud?
  • How will I be able to retrain and redeploy my model repeatedly within a few minutes?
  • Will I be able to leverage continuous integration or deployment (CI/CD)?
Figure 1. Considerations of a cloud-native AI developer when deploying models.

Hundreds of thousands of such use cases are emerging every day as everything from old industrial factories to self-driving cars starts using AI to solve real-world problems.

A Microsoft Azure-based solution, Custom Vision, goes a long way toward streamlining the developer experience. The entire workflow is driven through a graphical user interface (GUI); in fact, not a single line of code is needed to take an AI-based computer vision model from training to deployment.

Developers can easily upload images, annotate them with a few keystrokes, and train image classification or object detection models in the cloud, all within a few minutes. The models can then be downloaded in a wide variety of formats to the hardware of choice. With a few clicks, device twins in the Azure IoT Hub can be updated to run the new model. Retraining follows a similar set of easy steps. The entire workflow from training to deployment shrinks from hours to a few minutes.
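While the GUI workflow requires no code at all, the same train-and-export loop can also be scripted. Below is a minimal, hypothetical sketch using the Azure Custom Vision training SDK for Python; the endpoint, key, project name, tag, and image paths are placeholders, and data-set size checks, error handling, and export download are omitted for brevity.

```python
import time

from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch,
    ImageFileCreateEntry,
)
from msrest.authentication import ApiKeyCredentials

# Placeholder endpoint and key for a Custom Vision training resource.
trainer = CustomVisionTrainingClient(
    "https://<your-region>.api.cognitive.microsoft.com",
    ApiKeyCredentials(in_headers={"Training-key": "<training-key>"}),
)

# Exportable models require a "compact" domain.
compact = next(d for d in trainer.get_domains() if "compact" in d.name.lower())
project = trainer.create_project("defect-detection", domain_id=compact.id)

# Upload and tag a sample image (repeat for the full data set).
tag = trainer.create_tag(project.id, "scratch")
with open("images/scratch_01.jpg", "rb") as f:
    trainer.create_images_from_files(
        project.id,
        ImageFileCreateBatch(images=[
            ImageFileCreateEntry(name="scratch_01.jpg", contents=f.read(), tag_ids=[tag.id]),
        ]),
    )

# Train, then export the finished iteration as ONNX for the edge device.
iteration = trainer.train_project(project.id)
while trainer.get_iteration(project.id, iteration.id).status != "Completed":
    time.sleep(10)
trainer.export_iteration(project.id, iteration.id, platform="ONNX")
```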

We have released the OpenVINO™ toolkit: Ready-to-Deploy AI Vision Module app in the Microsoft Azure marketplace, based on the Custom Vision application and using the Intel® Distribution of OpenVINO™ toolkit to extend the capabilities of the app to Intel architecture-based edge devices.

Figure 2. Step-by-step workflow of ready-to-deploy application on Microsoft Azure marketplace.

Developers can now quickly train or retrain their models with Azure Machine Learning (AML) and deploy them to Intel-based edge devices, where they benefit from the performance boost provided by the Intel Distribution of OpenVINO toolkit and the Intel Neural Compute Stick 2 (or, alternatively, other Intel accelerators).

Under the covers, the integration leverages the ONNX Runtime with the OpenVINO toolkit as the Execution Provider. The process begins with a one-time setup that connects the edge device to the Microsoft Azure cloud and to a camera of choice. Once that is complete, developers can run their training jobs in Microsoft Azure and deploy their models to the edge within minutes, and the process can be repeated any number of times.
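As a rough illustration of what runs on the edge device, the sketch below loads an exported ONNX model with ONNX Runtime and requests the OpenVINO Execution Provider. The model path, input shape, and the MYRIAD_FP16 device type (targeting a Neural Compute Stick 2) are assumptions; available device types and provider options vary with the ONNX Runtime and OpenVINO EP versions installed.

```python
import numpy as np
import onnxruntime as ort

# Request the OpenVINO Execution Provider first, falling back to the default
# CPU provider if it is not available in this ONNX Runtime build.
session = ort.InferenceSession(
    "model.onnx",  # assumed path to the exported model
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
    provider_options=[{"device_type": "MYRIAD_FP16"}, {}],  # NCS2; e.g. CPU_FP32 on other targets
)

# Run one inference on a stand-in frame (shape assumed to be 1x3x224x224).
input_name = session.get_inputs()[0].name
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: frame})
print(session.get_providers(), outputs[0].shape)
```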

Figure 3. Train-to-deploy workflow using Azure Machine Learning, Intel Distribution of OpenVINO toolkit and ONNX Runtime.

This integration works with select Intel hardware today and will soon enable developers to leverage the entire ecosystem of deployment-ready hardware and services, such as Intel IoT RFP Ready Kits and Intel IoT Market Ready Solutions, with zero-code model training and deployment. Retraining and redeployment take only a few simple steps, all accomplished in a few minutes. The end-to-end workflow is fully automated; no manual model conversions are required. What’s more, developers don’t have to worry about outdated versions of code at the edge, because the containerized model image ships with its own copy of the Intel Distribution of OpenVINO toolkit. Model updates are non-disruptive, as the application switches between old and new models seamlessly.
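To make the non-disruptive update idea concrete, here is a hypothetical sketch of how an IoT Edge module might watch its module twin’s desired properties for a new model location and swap models without interrupting inference. The ModelZipUrl property name and the load_model helper are illustrative assumptions, not the marketplace app’s actual interface.

```python
import threading

from azure.iot.device import IoTHubModuleClient

# Connect using the environment that IoT Edge injects into the module container.
client = IoTHubModuleClient.create_from_edge_environment()

current_model = None          # model currently serving inference requests
model_lock = threading.Lock()

def load_model(url):
    """Download and initialize a model from `url` (placeholder for real logic)."""
    ...

def on_twin_patch(patch):
    global current_model
    # Hypothetical desired property carrying the location of the retrained model.
    url = patch.get("ModelZipUrl")
    if not url:
        return
    new_model = load_model(url)  # stage the new model in the background
    with model_lock:             # then switch atomically
        current_model = new_model

client.on_twin_desired_properties_patch_received = on_twin_patch
```

In the marketplace app this switching is handled for you; the sketch only shows the general pattern of a twin-driven hot swap.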

Conclusion

We hope this integration is an important step toward helping the AI developer community reap the benefits of AI with as little cognitive overload as possible. The steps required to get started are minimal, while the app remains flexible enough to be customized for use-case-specific requirements. We are committed to ongoing improvements, so give it a try and send us your feedback!

Notices and Disclaimers

FTC Optimization Notice

Intel technologies may require enabled hardware, software or service activation.

No product or component can be absolutely secure.

Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

About the Author
Mary is the Community Manager for this site. She likes to bike, and do college and career coaching for high school students in her spare time.