Vision-Based System Design Part 3 - Behind The Screen

Image-Processing Chain 

The image-processing chain consists of both upstream and downstream elements and receives the pixel data output by the device interface. However, the received pixels may not be in the correct format to display an image, and image correction may also be needed, particularly if a colour sensor is used.

To maintain throughput at the required data rates, image-processing chains are often implemented in FPGAs, exploiting their parallel nature and allowing tasks to be pipelined for high frame rates. It is also vital to consider latency in applications such as Advanced Driver Assistance Systems (ADAS). Basing the image-processing cores on a common protocol simplifies interconnection of the processing IP and establishes an efficient image-processing chain. Several protocols are in wide use; AXI is one of the most common thanks to its flexibility, offering both memory-mapped (AXI4, AXI4-Lite) and streaming (AXI4-Stream) interfaces.
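To illustrate the streaming style of interconnect, the plain C++ sketch below models the beats of an AXI4-Stream-like video interface, where Xilinx video IP conventionally signals start-of-frame on TUSER and end-of-line on TLAST. The StreamBeat struct and the stage-chaining function are a software model for exposition only, not an actual hardware interface.

#include <cstdint>
#include <vector>

// One beat of an AXI4-Stream-style video stream: pixel data plus the
// sideband flags Xilinx video IP conventionally uses (TUSER = start of
// frame, TLAST = end of line).
struct StreamBeat {
    uint32_t data;  // packed pixel, e.g. 0x00RRGGBB
    bool     user;  // start-of-frame marker
    bool     last;  // end-of-line marker
};

// A processing stage consumes one beat and produces one beat, so stages
// can be pipelined back to back without intermediate frame buffers.
using Stage = StreamBeat (*)(const StreamBeat&);

// Hypothetical stage: invert the pixel value, passing sidebands through.
StreamBeat invert(const StreamBeat& in) {
    return { ~in.data & 0x00FFFFFFu, in.user, in.last };
}

// Push a whole frame through a chain of stages, beat by beat.
std::vector<StreamBeat> run_chain(std::vector<StreamBeat> frame,
                                  const std::vector<Stage>& stages) {
    for (auto& beat : frame)
        for (Stage s : stages)
            beat = s(beat);
    return frame;
}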

Typical processing stages within the image-processing chain are listed below; a software sketch of several of them follows the list:

• Colour Filter Array Interpolation: reconstruction of each pixel’s full colour from the sensor’s Bayer pattern (demosaicing);

• Colour Space Conversion: conversion from RGB to the YUV colour space used by many image-processing algorithms and output schemes;

• Chroma Re-sampling: sub-sampling of YUV pixels into a more bandwidth-efficient encoding scheme such as 4:2:2 or 4:2:0;

• Image Correction Algorithms: colour or gamma correction, image enhancement and noise reduction;

• On the downstream side, the video output timing can be configured and the stream converted back to a native parallel-output video format before being sent to the required driver.
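To make these stages concrete, the following plain C++ sketch models four of them in software: a deliberately simple nearest-neighbour demosaic of an RGGB Bayer pattern, RGB-to-YCbCr conversion with the full-range BT.601 coefficients, 4:2:2 chroma re-sampling, and a gamma-correction look-up table. The function names and the RGGB layout are assumptions made for illustration; real FPGA cores implement these stages as pipelined fixed-point hardware.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct RGB   { uint8_t r, g, b; };
struct YCbCr { uint8_t y, cb, cr; };

// Clamp a double to the 0..255 range of an 8-bit channel.
static uint8_t clamp8(double v) {
    return (uint8_t)std::min(255.0, std::max(0.0, std::round(v)));
}

// Nearest-neighbour demosaic of an RGGB Bayer mosaic: each output pixel
// takes its R, G and B samples from the 2x2 Bayer cell it falls in.
// Production cores use bilinear or edge-aware interpolation instead.
RGB demosaic_rggb(const std::vector<uint8_t>& raw, int width, int x, int y) {
    int bx = x & ~1, by = y & ~1;              // top-left of the 2x2 cell
    return { raw[by * width + bx],             // R site
             raw[by * width + bx + 1],         // G site
             raw[(by + 1) * width + bx + 1] }; // B site
}

// RGB to YCbCr using the full-range BT.601 coefficients.
YCbCr rgb_to_ycbcr(RGB p) {
    return { clamp8( 0.299    * p.r + 0.587    * p.g + 0.114    * p.b),
             clamp8(-0.168736 * p.r - 0.331264 * p.g + 0.5      * p.b + 128.0),
             clamp8( 0.5      * p.r - 0.418688 * p.g - 0.081312 * p.b + 128.0) };
}

// 4:2:2 chroma re-sampling of a horizontal pixel pair: luma is kept per
// pixel, chroma is averaged, giving packed Y0 Cb Y1 Cr (YUY2) order.
void subsample_422(YCbCr a, YCbCr b, uint8_t out[4]) {
    out[0] = a.y;
    out[1] = (uint8_t)((a.cb + b.cb) / 2);
    out[2] = b.y;
    out[3] = (uint8_t)((a.cr + b.cr) / 2);
}

// Gamma correction as a 256-entry look-up table, much as a hardware LUT
// stage would apply it per channel.
std::vector<uint8_t> gamma_lut(double gamma) {
    std::vector<uint8_t> lut(256);
    for (int i = 0; i < 256; ++i)
        lut[i] = clamp8(255.0 * std::pow(i / 255.0, 1.0 / gamma));
    return lut;
}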

Some systems also use external DDR memory as a frame store. Often, this is also available to the processor on the SoC, enabling the system supervisor to transfer data over networks like Gigabit Ethernet or USB if required, or to act as an extension to the image-processing chain.
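In software terms, such a frame store is often managed as a ping-pong (double) buffer, so the chain can write one frame while the processor reads the previous one. The class below is a minimal sketch of that idea; the two-buffer scheme and method names are assumptions for illustration, and a real design would map the buffers to physical DDR addresses shared with the FPGA logic.

#include <array>
#include <cstdint>
#include <vector>

// Ping-pong (double-buffered) frame store: the image-processing chain
// fills one buffer while the processor, network stack or display reads
// the other; swap() is called at each end of frame.
class FrameStore {
public:
    explicit FrameStore(std::size_t frame_bytes)
        : bufs_{{ std::vector<uint8_t>(frame_bytes),
                  std::vector<uint8_t>(frame_bytes) }} {}

    std::vector<uint8_t>&       write_buffer()       { return bufs_[write_idx_]; }
    const std::vector<uint8_t>& read_buffer()  const { return bufs_[1 - write_idx_]; }

    // The buffer just written becomes the one to read from.
    void swap() { write_idx_ = 1 - write_idx_; }

private:
    std::array<std::vector<uint8_t>, 2> bufs_;
    int write_idx_ = 0;
};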

System Supervision 

System supervision is traditionally implemented in the processor, which handles the commands that configure the image-processing chain to suit the application. To receive and process these commands, the system supervisor must support a range of communications interfaces, from simple RS232 through Gigabit Ethernet, USB and PCIe to more specialized interfaces such as CAN.
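As a sketch of how such command handling might look, the C++ fragment below parses simple text commands (arriving, say, over RS232 or a TCP socket) and turns them into register writes on the chain's control interface. The command set, register offsets and write_register() hook are all invented for illustration.

#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>

// Placeholder for a memory-mapped register write into the image-processing
// chain (e.g. over an AXI4-Lite control interface).
void write_register(uint32_t offset, uint32_t value) {
    std::cout << "reg[0x" << std::hex << offset << "] <= "
              << std::dec << value << '\n';
}

// Hypothetical register map for the chain's control interface.
constexpr uint32_t REG_GAMMA   = 0x10;  // gamma value, 8.8 fixed point
constexpr uint32_t REG_OVERLAY = 0x14;  // overlay enable flag

// Parse one supervision command, e.g. "GAMMA 2.2" or "OVERLAY 1".
bool handle_command(const std::string& line) {
    std::istringstream in(line);
    std::string cmd;
    in >> cmd;
    if (cmd == "GAMMA") {
        double g = 1.0;
        if (!(in >> g)) return false;
        write_register(REG_GAMMA, (uint32_t)(g * 256.0));  // to 8.8 fixed point
        return true;
    }
    if (cmd == "OVERLAY") {
        int on = 0;
        if (!(in >> on)) return false;
        write_register(REG_OVERLAY, on ? 1u : 0u);
        return true;
    }
    return false;  // unknown command
}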

If the architecture of the embedded vision system permits, the processor can also generate image-overlay information, which may be superimposed on the output image. Access to the image data likewise allows the processor to handle other application tasks.
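A minimal sketch of such overlay superimposition follows, assuming the processor renders a packed ARGB overlay buffer that is alpha-blended per pixel onto a packed RGB frame; the pixel formats and function name are assumptions for illustration.

#include <cstdint>
#include <vector>

// Blend an ARGB overlay generated by the processor onto an RGB frame:
// out = alpha * overlay + (1 - alpha) * frame, per channel, in integer
// arithmetic.
void blend_overlay(std::vector<uint8_t>& frame,          // packed RGB, 3 bytes/pixel
                   const std::vector<uint8_t>& overlay,  // packed ARGB, 4 bytes/pixel
                   std::size_t pixels) {
    for (std::size_t i = 0; i < pixels; ++i) {
        uint32_t a = overlay[i * 4];                     // alpha channel
        for (int c = 0; c < 3; ++c) {
            uint32_t o = overlay[i * 4 + 1 + c];
            uint32_t f = frame[i * 3 + c];
            frame[i * 3 + c] = (uint8_t)((a * o + (255 - a) * f) / 255);
        }
    }
}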

In The Next Issue

The next article in this series will describe how to build a working vision system using readily available tools with an off-the-shelf kit containing a Zynq 7020 programmable SoC and a commercial image-sensing module.

For more information, please visit: https://www.xilinx.com/products/design-tools/embedded-vision-zone.html