
Implementing a neural network with Intel’s NCS2


By Mark Patrick, Technical Marketing Manager, EMEA, Mouser Electronics

For many developers, getting started on their first neural network application can be extremely daunting. There are many hardware factors to consider, such as the required amount of computing resources and memory bandwidth, before even moving on to software. Thankfully, taking those first exploratory prototyping steps is less of a challenge when using a well-documented hardware inference acceleration platform such as the Intel Neural Compute Stick 2 (NCS2). This dedicated hardware accelerator executes up to four trillion operations per second and is optimised for computer-vision applications. Comprising an Intel Movidius Myriad X vision processing unit (VPU) with 16 128-bit very-long-instruction-word (VLIW) cores, the compact package communicates with its host over a standard USB interface.

Figure 1: Workflow of the Intel OpenVINO toolkit

Getting started

First, Intel recommends an x86_64 high-specification desktop computer running Ubuntu 16.04 as the development host; a video camera handles image capture. Intel's OpenVINO toolkit greatly simplifies the development and use of convolutional neural networks (CNNs), which are widely adopted for image-processing applications such as facial or object recognition.

By providing heterogeneous hardware support across different hardware computing platforms, from CPUs and GPUs to FPGAs and dedicated hardware accelerators, the toolkit speeds up prototyping of edge-based machine-vision applications. Regardless of the hardware platform, the toolkit communicates through a common interface (see Figure 1), optimising a pre-trained CNN model for use by the inference engine.
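That common interface boils down to a short, repeatable pattern in code. The sketch below uses the classic OpenVINO Python Inference Engine API (circa the 2020 releases); the model file names are placeholders, and targeting a different accelerator is just a matter of changing the device string.

```python
# Minimal sketch of the Inference Engine flow; file names are placeholders.
try:
    from openvino.inference_engine import IECore
except ImportError:
    IECore = None  # toolkit not installed; the flow below is illustrative

DEVICE = "MYRIAD"  # the NCS2 enumerates as a MYRIAD device


def run_inference(input_blob, model_xml="model.xml", model_bin="model.bin"):
    """Load an optimised IR model onto the target device and run one inference."""
    ie = IECore()
    net = ie.read_network(model=model_xml, weights=model_bin)
    exec_net = ie.load_network(network=net, device_name=DEVICE)
    input_name = next(iter(net.input_info))
    return exec_net.infer({input_name: input_blob})
```

Swapping `"MYRIAD"` for `"CPU"`, `"GPU"` or another plugin name retargets the same code, which is what makes the toolkit's heterogeneous support useful during prototyping.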

Comprising three principal components – computer-vision deep-learning capabilities, traditional computer-vision techniques and hardware acceleration – there’s a library of functions and pre-optimised kernels to rely on to speed up development. OpenVINO also includes optimised versions of the OpenCV and OpenVX vision-processing APIs and libraries.

The Python-based OpenVINO model optimiser is accessed via a command line interface, and can import pre-trained models developed by most popular frameworks such as Caffe and TensorFlow. It also allows analysis and configuration of the neural-network model optimisation process to accommodate different hardware inference environments.
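As an illustration, converting a pre-trained Caffe model from the command line might look like the following; the file name and output directory are placeholders, and `--data_type FP16` matches the half-precision format the Myriad VPU expects.

```shell
# Convert a pre-trained Caffe model to OpenVINO's IR format (.xml + .bin).
# Paths are placeholders; FP16 targets the NCS2's Myriad VPU.
python3 mo.py \
    --input_model face-detection.caffemodel \
    --data_type FP16 \
    --output_dir ./ir
```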

Once the hardware environment is ready, Intel's 'Getting Started' guide gives further instructions for downloading, installing and customising the toolkit, and provides a set of test functions to verify communication with, and successful operation of, the NCS2. Demo models are also included; see Figure 2.
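For example, the verification scripts shipped in the toolkit's demo directory can be pointed at the NCS2, which enumerates as a MYRIAD device; the exact installation path varies between toolkit versions.

```shell
cd /opt/intel/openvino/deployment_tools/demo
# Run the bundled SqueezeNet classification demo on the NCS2
./demo_squeezenet_download_convert_run.sh -d MYRIAD
```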

Figure 2: Directory listing of the demo models provided with the Intel OpenVINO toolkit

Facial-recognition demo

The demo models may have originated in Caffe, TensorFlow or Apache MXNet; each has been pre-optimised by OpenVINO's model optimiser and is supplied as a .bin and .xml file pair, ready for inference. One of these is the "interactive_face_detection_demo", consisting of three models that can run concurrently. The first performs face detection, drawing a bounding box around each face and presenting on-screen metrics together with a unique ID. The second model has been trained to infer the gender and age of each face identified in the video feed, and the third determines the head-pose direction of each face; see Figure 3.
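Before a camera frame reaches any of these models, it has to be reshaped to match the network's input. The sketch below shows that preparation step using plain NumPy; the 300×300 BGR, NCHW layout used here is typical of OpenVINO's face-detection models, but the exact shape should always be read from the network's own input blob.

```python
import numpy as np


def preprocess(frame, size=(300, 300)):
    """Resize (nearest-neighbour) and reorder an HxWx3 BGR frame
    into the 1x3xHxW NCHW layout that detection models typically expect."""
    h, w = frame.shape[:2]
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    resized = frame[rows][:, cols]               # nearest-neighbour resize
    chw = resized.transpose(2, 0, 1)             # HWC -> CHW
    return chw[np.newaxis].astype(np.float32)    # add batch dimension


# A stand-in for a captured video frame (480x640, 3 channels)
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
blob = preprocess(frame)
print(blob.shape)  # (1, 3, 300, 300)
```

The resulting blob is what gets passed to the inference engine; in a real application, OpenCV's resize routines would normally replace the index-based resize shown here.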

Figure 3: Output of the interactive face detection demo models included in the Intel OpenVINO toolkit
