Alternatively, performing the processing in the cloud requires the embedded vision system only to capture the image and transmit it over a network-enabled interface. This approach suits applications such as medical imaging or scientific research, where the processing can be very intensive but real-time results are not required.
To implement the processing chain, the heart of an embedded vision system must be a processing core capable not only of controlling the selected image sensor, but also of receiving its output, implementing the image processing pipeline and transmitting the images over the desired network infrastructure or to the chosen display. These demanding requirements often lead to the selection of an FPGA or, increasingly, an All Programmable System on Chip (SoC).
Xilinx Zynq All Programmable SoCs combine a dual-core ARM Cortex-A9 processor with FPGA fabric. The Processing System (PS) can communicate with a host over Gigabit Ethernet, PCIe or other interfaces such as CAN, while also performing general system housekeeping. The Programmable Logic (PL) section exploits the parallel nature of the FPGA fabric to receive and process images extremely efficiently.
If the images must be transmitted over a network, on-chip Direct Memory Access (DMA) controllers can be used to move image data efficiently from the PL to DDR memory in the PS. Once within PS DDR memory, the data can also be accessed by the DMA controllers of the selected transport medium. It is worth noting that the Cortex-A9 processors can perform further processing on the image within PS DDR, and that the Zynq architecture also allows processed images to be moved from PS DDR back into the image pipeline in the PL, giving maximum flexibility to choose the most efficient processing strategy. Figure 2 illustrates the tight integration of processing, memory control and interface functions within the Zynq device.
Following the sensor-selection guidance given in the first part of the series, this article has described a number of technologies, frameworks and devices that can help satisfy the stringent size, weight, power and cost (SWaP-C) constraints of high-performance embedded vision systems in demanding applications.
For more information, please visit: https://www.xilinx.com/products/design-tools/embedded-vision-zone.html