Intel® MKL-DNN: Part 1 – Library Overview and Installation
Original article: https://www.codeproject.com/Articles/1182908/Intel-MKL-DNN-Part-Library-Overview-and-Installa
Author: Intel Corporation
## Introduction

Deep learning is one of the hottest subjects in the field of computer science these days, fueled by the convergence of massive datasets, highly parallel processing power, and the drive to build increasingly intelligent devices. Deep learning is described by Wikipedia as a subset of machine learning (ML), consisting of algorithms that model high-level abstractions in data. As depicted in Figure 1, ML is itself a subset of artificial intelligence (AI), a broad field of study in the development of computer systems that attempt to emulate human intelligence.

Figure 1. Relationship of deep learning to AI.

Intel has been actively involved in the area of deep learning through the optimization of popular frameworks like Caffe and Theano to take full advantage of Intel® architecture (IA), the creation of high-level tools like the Intel® Deep Learning SDK for data scientists, and the provision of software libraries to the developer community, such as the Intel® Data Analytics Acceleration Library (Intel® DAAL) and the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN).

Intel MKL-DNN is an open source, performance-enhancing library for accelerating deep learning frameworks on IA. Software developers who are interested in the subject of deep learning may have heard of Intel MKL-DNN, but perhaps haven't had the opportunity to explore it firsthand.

The Developer's Introduction to Intel MKL-DNN tutorial series examines Intel MKL-DNN from a developer's perspective. Part 1 identifies informative resources and gives detailed instructions on how to install and build the library components. Part 2 of the tutorial series provides information on how to configure the Eclipse integrated development environment to build the C++ code sample, and also includes a source code walkthrough.
## Intel® MKL-DNN Overview

As depicted in Figure 2, Intel MKL-DNN is intended for accelerating deep learning frameworks on IA. It includes highly vectorized and threaded building blocks for implementing convolutional neural networks with C and C++ interfaces.

Figure 2. Deep learning framework on IA.

Intel MKL-DNN operates on these main object types: primitive, engine, and stream. These objects are defined in the library documentation as follows:

- Primitive – any operation, including convolution, data format reorder, and memory. Primitives can have other primitives as inputs, but can have only memory primitives as outputs.
- Engine – an execution device, for example, CPU. Every primitive is mapped to a specific engine.
- Stream – an execution context; you submit primitives to a stream and wait for their completion. Primitives submitted to a stream may have different engines. Stream objects also track dependencies between the primitives.

A typical workflow is to create a set of primitives, push them to a stream for processing, and then wait for completion. Additional information on the programming model is provided in the Intel MKL-DNN documentation.
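The create-primitives / submit-to-stream / wait workflow described above can be sketched in C++ roughly as follows. This is an illustrative sketch, not code from the repository: the names used (`engine::cpu`, `memory::format::nchw`, `relu_forward`, `stream::kind::eager`) follow the technical-preview API documented at the time of writing and may differ in later releases, and compiling it requires the installed mkldnn library.

```cpp
#include <vector>
#include "mkldnn.hpp"

using namespace mkldnn;

int main() {
    // 1. Engine: the execution device (here, the CPU).
    auto cpu_engine = engine(engine::cpu, 0);

    // 2. Memory primitives: describe a small fp32 tensor in NCHW layout
    //    and wrap user-allocated buffers. Memory is itself a primitive.
    memory::dims tz = {1, 1, 4, 4};
    std::vector<float> src_data(16, -1.0f), dst_data(16);
    auto md  = memory::desc({tz}, memory::data_type::f32, memory::format::nchw);
    auto src = memory({md, cpu_engine}, src_data.data());
    auto dst = memory({md, cpu_engine}, dst_data.data());

    // 3. Operation primitive: a forward ReLU that takes the source memory
    //    primitive as input and writes to the destination memory primitive.
    auto relu_d  = relu_forward::desc(prop_kind::forward, md, 0.0f);
    auto relu_pd = relu_forward::primitive_desc(relu_d, cpu_engine);
    std::vector<primitive> net;
    net.push_back(relu_forward(relu_pd, src, dst));

    // 4. Stream: submit the set of primitives and wait for completion.
    stream(stream::kind::eager).submit(net).wait();
    return 0;
}
```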
## Resources

There are a number of informative resources available on the web that describe what Intel MKL-DNN is, what it is not, and what a developer can expect to achieve by integrating the library with his or her deep learning project.

### GitHub Repository

Intel MKL-DNN is an open source library available to download for free on GitHub, where it is described as a performance library for DL applications that includes the building blocks for implementing convolutional neural networks (CNN) with C and C++ interfaces.

An important thing to note on the GitHub site is that although the Intel MKL-DNN library includes functionality similar to the Intel® Math Kernel Library (Intel® MKL) 2017, it is not API compatible. At the time of this writing, the Intel MKL-DNN release is a technical preview, implementing the functionality required to accelerate image recognition topologies like AlexNet and VGG.

### Intel Open Source Technology Center

The MKL-DNN | 01.org project microsite is a member of the Intel Open Source Technology Center known as 01.org, a community supported by Intel engineers who participate in a variety of open source projects. Here you will find an overview of the Intel MKL-DNN project, information on how to get involved and contribute to its evolution, and an informative blog entitled "Introducing the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN)" by Kent Moffat.
## Installing Intel MKL-DNN

This section elaborates on the installation information presented on the GitHub repository site by providing detailed, step-by-step instructions for installing and building the Intel MKL-DNN library components. The computer you use will require an Intel® processor supporting Intel® Advanced Vector Extensions 2 (Intel® AVX2). Specifically, Intel MKL-DNN is optimized for Intel® Xeon® processors and Intel® Xeon Phi™ processors.

GitHub indicates the software was validated on RedHat Enterprise Linux 7; however, the information presented in this tutorial was developed on a system running Ubuntu 16.04.
### Install Dependencies

Intel MKL-DNN has the following dependencies:

- CMake – a cross-platform tool used to build, test, and package software.
- Doxygen – a tool for generating documentation from annotated source code.

If these software tools are not already set up on your computer, you can install them by typing the following:
sudo apt install cmake
sudo apt install doxygen
### Download and Build the Source Code

Clone the Intel MKL-DNN library from the GitHub repository by opening a terminal and typing the following command:
git clone https://github.com/01org/mkl-dnn.git
Note: if Git is not already set up on your computer, you can install it by typing the following:
sudo apt install git
Once the installation has completed, you will find a directory named mkl-dnn in the Home directory. Navigate to this directory by typing:
cd mkl-dnn
As explained on the GitHub repository site, Intel MKL-DNN uses the optimized general matrix-to-matrix multiplication (GEMM) function from Intel MKL. The library supporting this function is also included in the repository and can be downloaded by running the prepare_mkl.sh script located in the scripts directory:
cd scripts && ./prepare_mkl.sh && cd ..
This script creates a directory named external, then downloads and extracts the library files to a directory named mkl-dnn/external/mklml_lnx.

The next command is executed from the mkl-dnn directory; it creates a subdirectory named build, runs CMake to generate the build system, and then runs Make to build the library:
mkdir -p build && cd build && cmake .. && make
### Validating the Build

To validate your build, execute the following command from the mkl-dnn/build directory:
make test
This step executes a series of unit tests to validate the build. All of these tests should indicate Passed, along with their processing times, as shown in Figure 3.

Figure 3. Test results.

### Library Documentation

Documentation for Intel MKL-DNN is available online. This documentation can also be generated locally on your system by executing the following command from the mkl-dnn/build directory:
make doc
### Finalize the Installation

Finalize the installation of Intel MKL-DNN by executing the following command from the mkl-dnn/build directory:
sudo make install
This command installs the libraries and other components required to develop Intel MKL-DNN enabled applications under the /usr/local directory:
Shared libraries (/usr/local/lib):

- libiomp5.so
- libmkldnn.so
- libmklml_intel.so

Header files (/usr/local/include):

- mkldnn.h
- mkldnn.hpp
- mkldnn_types.h

Documentation (/usr/local/share/doc/mkldnn):

- Intel license and copyright notice
- Various files that make up the HTML documentation (under /reference/html)
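To confirm that the installed headers and shared library are usable together, a minimal program that only constructs a CPU engine can be compiled against them. This is a sketch rather than one of the repository's examples; it assumes the default /usr/local install prefix and the preview-era `engine(engine::cpu, 0)` constructor:

```cpp
#include <iostream>
#include "mkldnn.hpp"

int main() {
    // Creating an engine exercises both mkldnn.hpp and libmkldnn.so,
    // confirming the headers and library were installed consistently.
    mkldnn::engine cpu_engine(mkldnn::engine::cpu, 0);
    std::cout << "Intel MKL-DNN installation OK" << std::endl;
    return 0;
}
```

Build it with, for example, `g++ -std=c++11 check_install.cpp -lmkldnn -o check_install` (the filename check_install.cpp is arbitrary); if the program runs and prints the message, the installation is complete.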
## Building the Code Examples on the Command Line

The GitHub repository contains C and C++ code examples that demonstrate how to build a neural network topology block that consists of convolution, rectified linear unit, local response normalization, and pooling. The following section describes how to build these code examples from the command line in Linux. Part 2 of the tutorial series demonstrates how to configure the Eclipse integrated development environment for building and extending the C++ code example.

### C++ Example Command-Line Build (G++)

To build the C++ example program (simple_net.cpp) included in the Intel MKL-DNN repository, first go to the examples directory:
cd ~/mkl-dnn/examples
Next, create a destination directory for the executable:

mkdir -p bin

Build the simple_net.cpp example by linking the shared Intel MKL-DNN library and specifying the output directory as follows:

g++ -std=c++11 simple_net.cpp -lmkldnn -o bin/simple_net_cpp

Figure 4. C++ command-line build using G++.

Go to the bin directory and run the executable:
cd bin
./simple_net_cpp
### C Example Command-Line Build Using GCC

To build the C example application (simple_net.c) included in the Intel MKL-DNN repository, first go to the examples directory:

cd ~/mkl-dnn/examples

Next, create a destination directory for the executable:

mkdir -p bin

Build the simple_net.c example by linking the Intel MKL-DNN shared library and specifying the output directory as follows:

gcc -Wall -o bin/simple_net_c simple_net.c -lmkldnn

Figure 5. C command-line build using GCC.

Go to the bin directory and run the executable:
cd bin
./simple_net_c
Once completed, the C application will print either passed or failed to the terminal.

## Next Steps

At this point you should have successfully installed the Intel MKL-DNN library, executed the unit tests, and built the example programs provided in the repository. In Part 2 of the Developer's Introduction to Intel MKL-DNN, you'll learn how to configure the Eclipse integrated development environment to build the C++ code sample, along with a walkthrough of the code.
## License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).