Vision-Based System Design Part 3 – Behind The Screen

Aaron Behman, Director of Strategic Marketing, Embedded Vision, Xilinx, Inc.

Adam Taylor CEng FIET, Embedded Systems Consultant.

The first two articles in this series discussed image-sensor selection and the basic signal-processing challenges encountered when designing embedded vision systems. Examined in detail, the signal chain contains a number of common elements for capturing, conditioning, analysing, and storing or displaying the images detected by the sensor. From a high-level system perspective, these can be considered within three functional groups.

Interface To The Image Sensor

Depending on the type of sensor, this interface must provide the required clocking, biases and configuration signals, as well as receive the sensor data, decoding it if necessary and formatting it before passing it to the image-processing chain.

Image-Processing Chain

The image-processing chain receives the image data from the device interface and performs operations such as colour filter array interpolation and colour-space conversion, for example from colour to grayscale.
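
As an illustration of a colour-space conversion stage, the sketch below converts RGB pixels to grayscale using the standard BT.601 luma weights. The fixed-point arithmetic, function names and frame layout are assumptions made for the example rather than part of any particular reference design.

```c
#include <stdint.h>

/* Convert one RGB888 pixel to an 8-bit grayscale value using integer-only
 * BT.601 luma weights (Y = 0.299R + 0.587G + 0.114B). Fixed-point maths
 * keeps the operation friendly to both soft processors and FPGA fabric. */
static inline uint8_t rgb_to_gray(uint8_t r, uint8_t g, uint8_t b)
{
    /* 77/256 ~= 0.299, 150/256 ~= 0.587, 29/256 ~= 0.114 */
    return (uint8_t)((77u * r + 150u * g + 29u * b) >> 8);
}

/* Apply the conversion to a whole frame stored as interleaved RGB. */
void frame_to_gray(const uint8_t *rgb, uint8_t *gray, unsigned pixels)
{
    for (unsigned i = 0; i < pixels; i++)
        gray[i] = rgb_to_gray(rgb[3 * i], rgb[3 * i + 1], rgb[3 * i + 2]);
}
```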

Any algorithmic manipulation needed is also applied within the image-processing chain. This may range from simple algorithms, such as noise reduction or edge enhancement, to more complex algorithms such as object recognition or optical flow. The algorithm implementation is commonly called the upstream section of the image-processing chain, and its complexity depends on the application. The output formatting, which converts the processed image data into the correct format for output to a display or over a communication interface, is referred to as the downstream section.
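
As a concrete example of a simple upstream algorithm, the sketch below applies a 3x3 sharpening kernel to a grayscale frame as a basic form of edge enhancement. The kernel values, border handling and frame layout are illustrative assumptions only.

```c
#include <stdint.h>

/* Illustrative 3x3 sharpening (edge-enhancement) pass over a grayscale
 * frame. Border pixels are copied through unmodified for simplicity. */
void sharpen(const uint8_t *in, uint8_t *out, int width, int height)
{
    static const int k[3][3] = { {  0, -1,  0 },
                                 { -1,  5, -1 },
                                 {  0, -1,  0 } };
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (x == 0 || y == 0 || x == width - 1 || y == height - 1) {
                out[y * width + x] = in[y * width + x];
                continue;
            }
            int acc = 0;
            for (int ky = -1; ky <= 1; ky++)
                for (int kx = -1; kx <= 1; kx++)
                    acc += k[ky + 1][kx + 1] * in[(y + ky) * width + (x + kx)];
            /* Clamp the result to the 8-bit output range. */
            out[y * width + x] = (uint8_t)(acc < 0 ? 0 : acc > 255 ? 255 : acc);
        }
    }
}
```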

System Supervision And Control

This is separate from the sensor-interfacing and image-processing functions and focuses on two aspects. The first is to handle configuration of the image-processing chain, provide analytics on the image and update the image-processing chain as required during algorithm execution. The second is the control and management of the wider embedded vision system, which involves taking care of the following (a simple bring-up sequence illustrating these duties is sketched after the list):

o Power management and sequencing of image device power rails;

o Performing self-test and other system management functions;

o Network-enabled or point-to-point communication;

o Configuration of the image device over an I2C or SPI link prior to the first imaging operations.
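
The sketch below shows how a supervision processor might sequence these duties at start-up. The rail names, delays and hardware hooks are placeholders; the real sequence depends entirely on the chosen sensor, power supplies and communications stack.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical board-support hooks; their implementations depend on the
 * chosen sensor, power-management ICs and communications stack. */
extern void rail_enable(int rail);
extern void delay_ms(uint32_t ms);
extern bool self_test_ok(void);
extern void sensor_configure(void);   /* I2C or SPI register writes  */
extern void comms_init(void);         /* network or point-to-point   */

enum { RAIL_ANALOG, RAIL_DIGITAL, RAIL_IO };

/* Bring-up sequence run by the supervision processor before the
 * image-processing chain is released to start capturing frames. */
bool system_bring_up(void)
{
    /* 1. Power rails enabled in the order the sensor datasheet demands. */
    rail_enable(RAIL_ANALOG);  delay_ms(5);
    rail_enable(RAIL_DIGITAL); delay_ms(5);
    rail_enable(RAIL_IO);      delay_ms(5);

    /* 2. Built-in self-test before committing to image capture. */
    if (!self_test_ok())
        return false;

    /* 3. Configure the imager and open the external communications link. */
    sensor_configure();
    comms_init();
    return true;
}
```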

Some applications may also allow the system supervision function to access a frame store and execute algorithms on the frames it contains. In this case the system supervision effectively becomes part of the image-processing chain.

Implementation Challenges

The sensor interface, image-processing chain and display or memory interface all require the ability to handle high data bandwidths. The system supervision and control, on the other hand, must be able to process and respond to commands received over the communications interface, and provide support for external communications. If the system supervision is to form part of the image-processing chain as well, then a high-performance processor is required.
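
To put the bandwidth demands in perspective, the short calculation below estimates the raw data rate of a hypothetical 1080p60 sensor with 10-bit pixels; the figures are assumed for illustration and ignore blanking intervals and protocol overhead.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Assumed sensor parameters, for illustration only. */
    const uint64_t width   = 1920;   /* pixels per line   */
    const uint64_t height  = 1080;   /* lines per frame   */
    const uint64_t fps     = 60;     /* frames per second */
    const uint64_t bits_px = 10;     /* raw bit depth     */

    uint64_t bits_per_second = width * height * fps * bits_px;
    printf("Raw sensor bandwidth: %.2f Gbit/s\n", bits_per_second / 1e9);
    return 0;  /* roughly 1.24 Gbit/s for these figures */
}
```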

To meet such demands, embedded vision systems can be implemented using a combination of a main processor and companion FPGA, or a programmable system-on-chip (SoC) such as a Xilinx Zynq device that closely integrates a high-performance processor with FPGA fabric; see Figure 1. Challenges within each of the three high-level areas influence the individual functions that are needed and how they are implemented. 

Device Interface 

The sensor interface is determined by the selected image sensor. Most embedded vision applications use CMOS Imaging Sensors (CIS), which may provide either a CMOS parallel output bus with status flags or a high-speed serialized interface that supports faster frame rates than a parallel bus allows. Serialized interfaces can simplify system interconnect at the expense of a more complex FPGA implementation.

To enable synchronization, it is common for the data channels that carry the image and other data words to be coupled with a synchronization channel containing code words that define the content on the data channels. Along with the data and synchronization lanes there is also a clock lane, as the interface is source-synchronous. These high-speed serialized lanes are normally implemented as LVDS or Reduced-Swing LVDS to reduce system noise and power.
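
As a sketch of what the synchronization channel implies for the receive logic, the fragment below tracks frame and line position from code words seen on the sync channel. The code-word values and structure are assumptions for illustration; each sensor family defines its own codes.

```c
#include <stdint.h>

/* Assumed synchronization code words; real imagers define their own. */
#define SYNC_FRAME_START 0x2AAu
#define SYNC_LINE_START  0x0AAu
#define SYNC_LINE_END    0x12Au

typedef struct {
    uint32_t frame;   /* frames seen so far          */
    uint32_t line;    /* current line within a frame */
    int      in_line; /* data channel words valid?   */
} sync_state_t;

/* Update framing state from one word received on the sync channel.
 * Pixel words on the data channels are only valid while in_line is set. */
void sync_update(sync_state_t *s, uint16_t sync_word)
{
    switch (sync_word) {
    case SYNC_FRAME_START: s->frame++; s->line = 0;     break;
    case SYNC_LINE_START:  s->line++;  s->in_line = 1;  break;
    case SYNC_LINE_END:    s->in_line = 0;              break;
    default:               /* idle or training words */ break;
    }
}
```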

Regardless of the interface, the sensor must usually be configured before any image can be obtained, typically via a general-purpose interface such as I2C or SPI. Implementing the sensor data interface in an FPGA not only provides the high signal bandwidth required, but also allows easier integration with the image-processing chain. The I2C or SPI sensor-configuration interface may be implemented either in the FPGA or by the system supervision and control processor.
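
A minimal sketch of sensor configuration over I2C from the supervision processor is shown below, using the Linux i2c-dev interface. The device address, register addresses and values are placeholders rather than a real sensor's register map.

```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

/* Write one 16-bit register address / 8-bit value pair to the sensor. */
static int sensor_write(int fd, unsigned reg, unsigned val)
{
    unsigned char buf[3] = { reg >> 8, reg & 0xFF, val };
    return write(fd, buf, sizeof(buf)) == (ssize_t)sizeof(buf) ? 0 : -1;
}

/* Configure a sensor on the given bus, e.g. "/dev/i2c-0". The 7-bit
 * address 0x36 and the register writes below are placeholders. */
int sensor_configure_i2c(const char *bus)
{
    int fd = open(bus, O_RDWR);
    if (fd < 0)
        return -1;
    if (ioctl(fd, I2C_SLAVE, 0x36) < 0) {
        close(fd);
        return -1;
    }

    /* Placeholder writes: exposure, gain, then streaming enable. */
    sensor_write(fd, 0x3500, 0x20);
    sensor_write(fd, 0x3508, 0x10);
    sensor_write(fd, 0x0100, 0x01);

    close(fd);
    return 0;
}
```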
