Giles Peckham, Regional Marketing Director at Xilinx,
and Adam Taylor CEng FIET, Embedded Systems Consultant
In an increasingly interconnected world, malicious attackers seek to exploit vulnerabilities in embedded vision systems for nefarious purposes. Depending upon the application, a successful attack could have serious consequences, ranging from the release of sensitive information to loss of life. For the developer, such a security breach can also bring significant reputational damage along with legal and regulatory repercussions.
To protect against malicious attackers, an embedded vision system should be subjected to a threat analysis during its design phase. This threat analysis performed early in the design cycle, prior to starting the detailed design, ensures the system and its information remain secure in operation.
This threat analysis will consider the different elements of the design, the sensitivity of its data and the different methods by which the system can be attacked. Of course, the sensitivity of the embedded vision system and its data will vary depending upon its application. For example, a military system may contain more sensitive aspects than a commercial surveillance system. However, in the commercial space, more complex applications such as ADAS or autonomous robotic vision systems will contain several sensitive elements.
As such, the threat analysis will consider elements including:
• Application – Is the application mission or life critical? What is the end effect if the device security is compromised?
• Data – The criticality of the information stored within the system.
• Deployment – Is the system remotely deployed or used within a controlled environment?
• Access – Both physical and remote: does the system allow remote access for control, maintenance or updates? If so, how does the application verify that the access is authorised?
• Communication Interfaces – Is the information transmitted to or from the system critical? Should the application be concerned about eavesdroppers snooping? Does the equipment need to protect against advanced attacks, for example replay attacks?
• Reverse Engineering – Does the embedded system contain Intellectual Property or other sensitive design techniques which must be protected?
The results of this threat analysis are used by the engineering design team to implement strategies within the design which address these identified threats. At a high level, addressing the identified threats can be categorised into one of the following approaches:
• Information Assurance – Ensuring information stored within the system and its communications are secure. This also needs to address identity assurance, which ensures access to the unit is from a trusted source, for example when communicating with and controlling its operation, or when updating application software in the field.
• Anti-Tamper – Ensures the system can protect itself from external attacks to access the system and its contents.
• Run Time – Ensures the system is protected during run time as it implements its application.
Typically, information assurance requires the use of encryption to protect both stored data and communications. The most commonly used encryption algorithm is the Advanced Encryption Standard (AES). AES is a block cipher which encrypts 128-bit blocks, with key sizes of 128, 192 or 256 bits. There are alternatives to AES for different applications, for example Simon and Speck, developed by the National Security Agency for low-power, computationally limited Internet of Things (IoT) applications.
Cryptography can also be used to digitally sign information. This enables the receiving system to verify the identity of the sender and ensure messages have not been changed. Digital signatures are achieved using public key encryption like RSA, and hashing algorithms like SHA-3. The first stage in creating a signature is to use the hashing algorithm to create a fixed-length hash value from an input of arbitrary length.
The resultant hash is encrypted using the private key of the sender and appended to the message as the signature. The receiving entity generates a hash of the information received using the same algorithm, decrypts the received signature using the sender's public key, and compares the two. If the calculated and received hashes agree, the receiver knows who created the information and that it was not modified in transit. As such, digital signatures are very important for verifying the integrity of software during both configuration and field updates of embedded systems.
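The hash-then-sign flow above can be sketched in outline. The example below (Python, using only the standard library's hashlib; the firmware message is illustrative, not from the reference design) shows the first stage: producing a fixed-length SHA-3 digest from an arbitrary-length input, and why any tampering invalidates a signature computed over that digest. The actual RSA signing and verification would then be performed with a cryptographic library and the sender's key pair.

```python
import hashlib

def digest(message: bytes) -> bytes:
    """Fixed-length SHA3-256 hash of an arbitrary-length message."""
    return hashlib.sha3_256(message).digest()

# Hypothetical firmware image of arbitrary length
firmware = b"example firmware image" * 1000
signature_input = digest(firmware)
assert len(signature_input) == 32          # always 256 bits, whatever the input size

# Integrity: flipping even a single byte produces a different hash, so a
# signature created over the original digest will fail to verify.
tampered = firmware[:-1] + b"\x00"
assert digest(tampered) != signature_input
```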
Along with encryption, a more secure information assurance solution also protects test access ports, such as the JTAG port, once the system is deployed. This limits the ability of attackers to read back or modify data and programs if they gain physical access to the unit.
Preventing physical access to, and therefore modification of, the system is where the anti-tamper solution is deployed. Anti-tamper techniques protect a wide area of the embedded vision system. While each system and its threats are different, a common anti-tamper approach monitors system voltage rails and temperatures to ensure an attacker cannot manipulate the temperature or apply out-of-specification voltages. Such manipulations have been used by attackers to provoke unexpected behaviour in embedded systems, exposing security vulnerabilities.
Implementing a security solution
When it comes to creating the secure electronic architecture for an embedded vision system, both the Zynq®-7000 SoC and Zynq® UltraScale+™ MPSoC provide the necessary building blocks for a secure system. Often these devices are used in conjunction with the reVISION™ acceleration stack, which enables the use of high-level development frameworks such as OpenCV and Caffe. The inbuilt facilities provided by the silicon and its configuration stage enable the implementation of anti-tamper functions and secure configuration, which helps address the information assurance and anti-tamper requirements.
The remaining security solution is implemented at run-time and is used to protect data in memories, peripherals and system level control registers. Protecting these can prevent illegal memory accesses, configuration changes and malware injection. Protection mechanisms include encryption, functional isolation, Trustzone and hypervisors, while the application can implement permissions-based user accounts and secure tokens.
To secure memories and communication, encryption is used. Many encryption algorithms can be accelerated within the programmable logic of the Zynq-7000 or Zynq UltraScale+ MPSoC. However, implementing these algorithms using a hardware description language increases the development time.
Using a system optimising compiler such as SDSoC™ enables the developer to specify the algorithm in a high-level language like C or C++. The security solution can then be developed at a higher level of abstraction, with bottleneck functions accelerated into the programmable logic.
AES is a symmetric algorithm which uses the same key for both encryption and decryption. The algorithm can be computationally intensive, requiring byte substitutions via a defined S-box, matrix multiplications over a finite field, and several shift operations. As such, implementing AES encryption or decryption on a CPU can become a processing bottleneck.
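To see why AES is costly on a general-purpose CPU, consider the finite-field arithmetic behind its MixColumns matrix multiplications. The sketch below (Python, with the worked value from the FIPS-197 specification) implements byte multiplication in GF(2^8): every multiply becomes a loop of shifts and conditional XORs, which a CPU executes serially but programmable logic can evaluate as simple parallel gate logic.

```python
# Multiplication in GF(2^8) with the AES reduction polynomial
# x^8 + x^4 + x^3 + x + 1 (0x11B).
AES_POLY = 0x11B

def xtime(a: int) -> int:
    """Multiply a field element by x (i.e. by {02})."""
    a <<= 1
    if a & 0x100:          # reduce modulo the AES polynomial on overflow
        a ^= AES_POLY
    return a & 0xFF

def gf_mul(a: int, b: int) -> int:
    """General byte multiply: shift-and-XOR, one iteration per bit of b."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a = xtime(a)
        b >>= 1
    return result

# Worked example from FIPS-197: {57} x {13} = {fe}
assert gf_mul(0x57, 0x13) == 0xFE
```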
Implementing AES encryption using SDSoC enables a significant acceleration in the performance when accelerated into the programmable logic for each of the supported operating systems.
The number of threats facing the embedded vision developer is increasing. Performing a threat analysis can help identify the actual threats to the system, enabling the design team to create a design which negates those threat vectors. Devices like the Zynq-7000 and Zynq UltraScale+ MPSoC support security solutions with embedded silicon features and support for secure configuration. Run-time solutions such as TrustZone and isolation can be implemented, along with encryption engines created using SDSoC, which aligns with a reVISION development flow.
One of the advantages of embedded vision systems is their ability to observe wavelengths outside those which are visible to humans. This enables the embedded vision system to provide superior performance across a range of applications from low light vision to scientific imaging and analysis.
While imaging systems at shorter wavelengths, including X-ray and ultraviolet, are used for scientific applications such as astronomy, it is IR wavelengths which are most often deployed in industrial, automotive and security applications. As IR imagers sense background thermal radiation, they require no scene illumination and can see in total darkness, making them ideal for automotive and security applications. Within the industrial sphere, IR systems can also be used in thermographic applications, where they accurately measure the temperature of the scene contents. For example, in renewable energy, thermal imagers can be combined with drones to monitor the performance of solar arrays and detect early failures through the increasing temperature of failing elements.
Working outside the visible range requires the correct selection of the imaging device technology. If the system operates within the near-IR spectrum or below, developers can use devices such as Charge Coupled Devices (CCDs) or CMOS (Complementary Metal Oxide Semiconductor) Image Sensors (CIS). However, as developers move into the infrared spectrum they need to use specialized IR detectors.
The need for specialized sensors in the IR domain is in part due to the excitation energy required by silicon-based imagers such as CCDs or CIS. Silicon has a bandgap of approximately 1.1 eV, but at IR wavelengths photon energies range from only about 1.24 meV to 1.7 eV, so most of the IR band cannot excite an electron across the bandgap. As such, IR imagers tend to be based upon HgCdTe or InSb. These have lower excitation energies and are often combined with a CMOS read-out IC, called a ROIC, which controls and reads out the sensor.
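The bandgap argument can be checked with the photon-energy relation E = hc/λ. A minimal sketch (the wavelengths chosen below are illustrative endpoints of the near-IR and long-wave IR bands):

```python
# E(eV) = hc / lambda, with hc ~ 1239.84 eV.nm
H_C_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm: float) -> float:
    return H_C_EV_NM / wavelength_nm

# Near-IR edge (700 nm): ~1.77 eV -- above silicon's ~1.1 eV bandgap,
# so silicon imagers still respond here.
# LWIR (10 um = 10000 nm): ~0.124 eV -- far too low to excite silicon,
# hence the need for HgCdTe, InSb or microbolometer detectors.
near_ir = photon_energy_ev(700)
lwir = photon_energy_ev(10_000)
assert near_ir > 1.1 > lwir
```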
IR systems fall into two categories: cooled and uncooled. Cooled thermal imagers use image sensor technology based upon HgCdTe or InSb semiconductors. To provide useful images, a cooled thermal imager requires a cooling system to reduce the temperature of the sensor to between 70 and 100 Kelvin. This is required to reduce the sensor's self-generated thermal noise to below that generated by the scene contents. Using a cooled sensor brings increased complexity, cost and weight for the cooling system, and the system takes several minutes to reach operating temperature and produce a usable picture.
Uncooled IR sensors can operate at room temperature and use microbolometers in place of an HgCdTe or InSb sensor. In a microbolometer, each pixel changes resistance as IR radiation strikes it, and this resistance change maps to the temperatures in the scene. Typically, microbolometer-based thermal imagers have much-reduced resolution compared with a cooled imager. However, they make thermal-imaging systems simpler, lighter and less costly to create.
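As a minimal sketch of the microbolometer principle, the first-order model below relates a pixel's resistance to scene-induced heating. The resistance and temperature coefficient values are illustrative assumptions, not FLIR Lepton data (vanadium-oxide bolometers typically have a coefficient of roughly −2 %/K):

```python
def bolometer_resistance(r0_ohms: float, tcr_per_k: float, delta_t_k: float) -> float:
    """First-order pixel model: R = R0 * (1 + alpha * dT).

    r0_ohms   -- nominal pixel resistance (illustrative)
    tcr_per_k -- temperature coefficient of resistance, alpha (illustrative)
    delta_t_k -- pixel heating caused by incident IR radiation
    """
    return r0_ohms * (1.0 + tcr_per_k * delta_t_k)

# A hypothetical 100 kOhm VOx pixel warmed by 1 K of absorbed IR:
r = bolometer_resistance(100e3, -0.02, 1.0)   # ~98 kOhm
```

The ROIC digitises this small resistance change pixel by pixel, producing the raw thermal image that downstream processing converts into scene temperatures.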
For this reason, many industrial, security and automotive applications use uncooled image sensors like the FLIR Lepton.
Creating an uncooled thermal imager presents a range of challenges for embedded vision designers. It requires flexible interfacing capability to connect the selected device and display, along with the processing capability to implement any additional image processing on the video stream. Of course, as many of these devices are handheld or power constrained, power efficiency also becomes a significant driver.
The FLIR Lepton is a thermal imager which operates in the long-wave IR spectrum. It is a self-contained camera module with a resolution of 80 by 60 pixels (Lepton 2) or 160 by 120 pixels (Lepton 3). Configuration of the Lepton is performed over an I2C bus, while the video is output over SPI using a Video over SPI (VoSPI) protocol. These interfaces make it ideal for use in many embedded systems which require the ability to image in the IR region.
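A sketch of how a receiver might frame the VoSPI stream follows. It assumes the Lepton 2 packet layout, in which each 164-byte packet carries a 2-byte ID and 2-byte CRC header followed by one 80-pixel video line, and packets whose masked ID nibble reads 0xF are discard packets; the function name is our own, not FLIR's API.

```python
PACKET_LEN = 164   # Lepton 2: 4-byte header + 80 pixels x 2 bytes

def parse_vospi_packet(packet: bytes):
    """Return (line_number, payload) for a video packet, or None for a
    discard packet that the receiver should skip while synchronising."""
    assert len(packet) == PACKET_LEN
    pkt_id = int.from_bytes(packet[0:2], "big")
    if (pkt_id & 0x0F00) == 0x0F00:    # discard packet marker
        return None
    line = pkt_id & 0x0FFF             # line number within the frame
    return line, packet[4:]            # skip ID + CRC, keep pixel data

# Example: a packet carrying video line 5
pkt = (0x0005).to_bytes(2, "big") + b"\x00\x00" + bytes(160)
line, payload = parse_vospi_packet(pkt)
```

Synchronising with the stream then amounts to clocking packets until a non-discard packet with line number 0 arrives, marking the start of a valid frame.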
One example combines the Lepton with a Xilinx Zynq Z7007S device mounted on a MiniZed development board. As the MiniZed board supports WiFi and Bluetooth, it is possible to create both IIoT/IoT applications and traditional imaging solutions with a local display, in this case a 7-inch touch display. This example creates a design which interfaces with the FLIR Lepton and outputs the video on the local display.
To create a tightly integrated solution, designers can use the processing system (PS) of the All Programmable Zynq SoC to configure the Lepton over the I2C bus. The PS can also provide an interface to the radio module, allowing future upgrades to add WiFi and Bluetooth communications, while the programmable logic is used to receive the VoSPI stream, perform direct memory access to DDR, and output video to the local display. The high-level architecture of the solution is shown in Figure 2.
Within the image processing pipeline, designers can instantiate custom image processing functions generated using High Level Synthesis or use pre-existing IP blocks such as the Image Enhancement core which provides noise filtering, edge enhancement and halo suppression.
This high-level architecture requires translation into a detailed design within Vivado, as such the following IP blocks are used to create the hardware solution.
• Quad SPI Core – Configured for single mode operation, receives the VoSPI from the Lepton.
• Video Timing Controller – Generates the video timing signals for the output display.
• VDMA – Reads an image from the PS DDR into a PL AXI Stream.
• AXI Stream to Video Out – Converts the AXI Streamed video data to parallel video with timing syncs provided by the Video Timing Core.
• Zed_ALI3_Controller – Display controller for the 7-inch touch screen display.
• Wireless Manager – Provides interfaces to the Radio Module for Bluetooth and WiFi. While not used in this example, including this module within the HW design means addition of wireless communications requires only additional SW development.
When these IP blocks are combined with the Zynq processing system and the necessary AXI interconnect IP, developers obtain a detailed hardware design.
Most of the IP blocks included within the Vivado design require configuration by application software developed within SDK. This provides the flexibility to change operational parameters as the product evolves, for example accommodating a larger display or moving from the Lepton 2 to the Lepton 3 sensor. For this example, no operating system is required. The application software configures the video timing controller (800 pixels by 480 lines) and the video direct memory access controller, which reads frames from the memory-mapped DDR and converts them into an AXI Stream compatible with the image processing chain.
Following the initialisation of the IP blocks, the application software performs the following:
• Configures the FLIR Lepton to perform automatic gain control.
• Synchronises with the VoSPI data to detect the start of a valid frame.
• Applies a digital zoom to scale the image up to use the 800-pixel by 480-line display efficiently. This is achieved by outputting each pixel either 8 or 4 times in each axis, depending upon the sensor selected.
• Transfers the frame to the DDR memory. As the FLIR Lepton only outputs 8-bit data when AGC is enabled, this is mapped to the green channel of the RGB display.
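The digital zoom and green-channel mapping in the steps above can be sketched as simple pixel replication. This is an illustrative model of the logic rather than the reference design's code; the scale factor of 8 assumed below corresponds to the Lepton 2's 80 × 60 frame filling 640 × 480 of the 800 × 480 panel.

```python
def zoom_and_map_green(frame, scale):
    """Nearest-neighbour zoom by pixel replication, placing each 8-bit
    grey value into the green channel of a 24-bit RGB word (0x00GG00)."""
    out = []
    for row in frame:
        # Repeat each pixel `scale` times horizontally, shifted into green.
        scaled_row = [(p & 0xFF) << 8 for p in row for _ in range(scale)]
        # Repeat the scaled row `scale` times vertically (rows are read-only,
        # so sharing the same list object is fine).
        out.extend([scaled_row] * scale)
    return out

lepton2_frame = [[42] * 80 for _ in range(60)]   # 80 x 60 grey frame
rgb = zoom_and_map_green(lepton2_frame, 8)       # 640 x 480 for the display
```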
When the completed program is executed on the MiniZed, with the FLIR Lepton connected and outputting to the 7-inch touch-sensitive display, the output of the FLIR can be seen very clearly.
Imaging outside the visible range provides significant benefits and is used across several applications, although each application requires careful selection of the sensor technology. This article has demonstrated how an uncooled thermal imaging solution can be quickly and easily created using a cost-optimised Zynq SoC designed with Xilinx IP cores.
You can find the reference design here
For more information, please visit: http://www.xilinx.com/products/design-tools/embedded-vision-zone.html