[Translation] Using Portable ONNX AI Models in C#
Original article: https://www.codeproject.com/Articles/5278505/Using-Portable-ONNX-AI-Models-in-Csharp
Original author: Keith Pijanowski
Preface
In this article, I provide a brief overview of the ONNX Runtime and the ONNX format, and I show how to load and run an ONNX model using C# in ONNX Runtime. The sample for this article contains a working console application that demonstrates all the techniques shown here.
- Download source - 547.1 KB

In this article in our series about using portable neural networks in 2020, you'll learn how to install ONNX Runtime on an x64 architecture and use it in C#.
Microsoft co-developed ONNX with Facebook and AWS. Both the ONNX format and the ONNX Runtime have industry support to make sure that all the important frameworks are capable of exporting their graphs to ONNX and that these models can run on any hardware configuration.
The ONNX Runtime is an engine for running machine learning models that have been converted to the ONNX format. Both traditional machine learning models and deep learning models (neural networks) can be exported to the ONNX format. The runtime can run on Linux, Windows, and Mac, and on a variety of chip architectures. It can also take advantage of hardware accelerators such as GPUs and TPUs. However, there is not an install package for every combination of OS, chip architecture, and accelerator, so you may need to build the runtime from source if you are not using one of the common combinations. Check the ONNX Runtime website to get installation instructions for the combination you need. This article will show how to install ONNX Runtime on an x64 architecture with a default CPU and on an x64 architecture with a GPU.
In addition to being able to run on many hardware configurations, the runtime can be called from most popular programming languages. The purpose of this article is to show how to use ONNX Runtime in C#. I'll show how to install the onnxruntime package. Once ONNX Runtime is installed, I'll load a previously exported MNIST model into ONNX Runtime and use it to make predictions.
Installing and Importing the ONNX Runtime
Before using the ONNX Runtime, you need to install Microsoft.ML.OnnxRuntime, which is a NuGet package. You will also need the .NET CLI if you do not already have it. The following command installs the runtime on an x64 architecture with a default CPU:
dotnet add package microsoft.ml.onnxruntime
To install the runtime on an x64 architecture with a GPU, use this command:
dotnet add package microsoft.ml.onnxruntime.gpu
Once the runtime has been installed, it can be imported into your C# code files with the following using statements:
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;
The using statement that pulls in the Tensor tools will help us create inputs for ONNX models and interpret the output (prediction) of an ONNX model.
Loading ONNX Models
The snippet below shows how to load an ONNX model into ONNX Runtime running in C#. This code creates a session object that can be used to make predictions. The model being used here is the ONNX model that was exported from PyTorch.
There are a few things worth noting here. First, you need to query the session to get its inputs. This is done using the session's InputMetadata property. Our MNIST model has only one input parameter: an array of 784 floats that represents one image from the MNIST dataset. If your model has more than one input parameter, then InputMetadata will have an entry for each parameter.
// Load the MNIST test images and labels into the Utilities class.
Utilities.LoadTensorData();
string modelPath = Directory.GetCurrentDirectory() + @"/pytorch_mnist.onnx";

using (var session = new InferenceSession(modelPath))
{
    // imageIndex selects which test image to classify; it is defined
    // elsewhere in the sample application.
    float[] inputData = Utilities.ImageData[imageIndex];
    string label = Utilities.ImageLabels[imageIndex];
    Console.WriteLine("Selected image is the number: " + label);

    // Build a named tensor input for every parameter the model declares.
    var inputMeta = session.InputMetadata;
    var container = new List<NamedOnnxValue>();
    foreach (var name in inputMeta.Keys)
    {
        var tensor = new DenseTensor<float>(inputData, inputMeta[name].Dimensions);
        container.Add(NamedOnnxValue.CreateFromTensor<float>(name, tensor));
    }

    // Run code omitted for brevity.
}
Not shown in the code above are the utilities that read the raw MNIST images and convert each image to an array of 784 floats. The label for each image is also read in from the MNIST dataset so that the accuracy of predictions can be determined. This is standard .NET code, but you are still encouraged to check it out and use it. It will save you time if you need to read in images that are similar to the MNIST dataset.
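Those utilities are not reproduced here. As a rough illustration only, converting one raw 28x28 MNIST image into the 784-float input could look like the sketch below; the method name and the scaling of pixel bytes into the [0, 1] range are assumptions, since the preprocessing must match whatever the original model was trained with.

```csharp
using System;
using System.Linq;

class MnistConversionSketch
{
    // Convert one 28x28 MNIST image (784 raw bytes, values 0-255) into
    // the float array the model expects. Scaling to [0, 1] is an
    // assumption; use whatever normalization the training code used.
    static float[] ToFloatArray(byte[] rawPixels)
    {
        if (rawPixels.Length != 784)
            throw new ArgumentException("Expected a 28x28 image (784 bytes).");
        return rawPixels.Select(p => p / 255.0f).ToArray();
    }

    static void Main()
    {
        var image = new byte[784];
        image[0] = 255; // a single fully-on pixel
        float[] input = ToFloatArray(image);
        Console.WriteLine(input[0]);     // prints 1
        Console.WriteLine(input.Length); // prints 784
    }
}
```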
Using ONNX Runtime for Predictions
The function below shows how to use the ONNX session that was created when we loaded our ONNX model:
{
    // Load code not shown for brevity.

    // Run the inference.
    using (var results = session.Run(container))
    {
        // Each entry in results is one named output of the model.
        foreach (var r in results)
        {
            Console.WriteLine("Output Name: {0}", r.Name);
            int prediction = MaxProbability(r.AsTensor<float>());
            Console.WriteLine("Prediction: " + prediction.ToString());
        }
    }
}
Most neural networks do not return a prediction directly. They return a list of probabilities for each of the output classes. In the case of our MNIST model, the return value for each image will be a list of 10 probabilities. The entry with the highest probability is the prediction. An interesting test you can do is compare the probabilities the ONNX model returns to the probabilities returned by the original model when it is run within the framework that created it. Ideally, the change in model format and runtime should not change any of the probabilities produced. This would make a good unit test to run every time the model changes.
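The MaxProbability helper called in the snippet above comes from the sample's utility code, which is not shown. A minimal sketch of the underlying idea, an argmax over the output scores, could look like this (operating on a plain array rather than a Tensor&lt;float&gt; for simplicity; the real helper may differ):

```csharp
using System;
using System.Collections.Generic;

class ArgmaxSketch
{
    // Return the index of the largest value. For a 10-element MNIST
    // output, that index is the predicted digit.
    static int MaxProbability(IReadOnlyList<float> scores)
    {
        int best = 0;
        for (int i = 1; i < scores.Count; i++)
            if (scores[i] > scores[best])
                best = i;
        return best;
    }

    static void Main()
    {
        var scores = new float[] { 0.01f, 0.02f, 0.90f, 0.07f };
        Console.WriteLine(MaxProbability(scores)); // prints 2
    }
}
```

Note that an argmax works whether the outputs are raw logits or softmax probabilities, since softmax preserves ordering.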
Summary and Next Steps
In this article, I provided a brief overview of the ONNX Runtime and the ONNX format, then showed how to load and run an ONNX model using C# in ONNX Runtime.

The code sample for this article contains a working console application that demonstrates all the techniques shown here. It is part of a GitHub repository that explores the use of neural networks for predicting the numbers found in the MNIST dataset. Specifically, there are samples that show how to create neural networks in Keras, PyTorch, TensorFlow 1.0, and TensorFlow 2.0.
If you want to learn more about exporting to the ONNX format and using ONNX Runtime, check out the other articles in this series.
References
- https://docs.microsoft.com/en-us/dotnet/core/tutorials/with-visual-studio-code
- https://docs.microsoft.com/en-us/nuget/quickstart/install-and-use-a-package-using-the-dotnet-cli
- https://code.visualstudio.com/docs/setup/mac
- https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet
- https://microsoft.github.io/onnxruntime/
- https://github.com/microsoft/onnxruntime/blob/master/docs/CSharp_API.md#getting-started
- https://github.com/keithpij/onnx-lab(https://github.com/keithpij/onnx-lab)
License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).