Vision-Based System Design Part 3 - Behind The Screen


Aaron Behman, Director of Strategic Marketing, Embedded Vision, Xilinx, Inc.

Adam Taylor CEng FIET, Embedded Systems Consultant.

 

The first two articles in this series discussed image-sensor selection and the basic signal-processing challenges encountered when designing embedded vision systems. Examined in detail, the signal chain contains a number of common elements for capturing, conditioning, analysing, and storing or displaying the images detected by the sensor. From a high-level system perspective, these can be considered within three functional groups.

Interface To The Image Sensor

Depending on the type of sensor, this interface must provide the required clocking, biases and configuration signals, as well as receive the sensor data, decode it if necessary, and format it before passing it to the image-processing chain.

Image-Processing Chain

The image-processing chain receives the image data from the device interface and performs operations such as colour filter array interpolation and colour-space conversion, such as colour to grayscale.

Any algorithmic manipulation needed is also applied within the image-processing chain. This may range from simple algorithms, such as noise reduction or edge enhancement, to more complex algorithms like object recognition or optical flow. It is common to call the algorithm implementation the upstream section of the image-processing chain. The complexity of this section depends on the application. The output formatting, which converts the processed image data into the correct format to be output to either a display or over a communication interface, is referred to as the downstream section.
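To give a concrete flavour of the kind of upstream processing described above, the C sketch below applies a 3x3 convolution kernel to an 8-bit grayscale frame; the same operation underlies simple sharpening (edge enhancement) and box-filter noise reduction. The frame dimensions, buffer layout and kernel values are illustrative assumptions, and in a real system this stage would normally be a pipelined core in the FPGA fabric rather than software on the processor.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical frame dimensions, for illustration only. */
#define WIDTH  640
#define HEIGHT 480

/* Apply a 3x3 kernel (scaled by 'divisor') to an 8-bit grayscale frame.
 * Border pixels are simply copied to keep the sketch short. */
static void convolve3x3(const uint8_t *in, uint8_t *out,
                        const int kernel[3][3], int divisor)
{
    for (int y = 0; y < HEIGHT; y++) {
        for (int x = 0; x < WIDTH; x++) {
            if (y == 0 || y == HEIGHT - 1 || x == 0 || x == WIDTH - 1) {
                out[y * WIDTH + x] = in[y * WIDTH + x];
                continue;
            }
            int acc = 0;
            for (int ky = -1; ky <= 1; ky++)
                for (int kx = -1; kx <= 1; kx++)
                    acc += kernel[ky + 1][kx + 1] *
                           in[(y + ky) * WIDTH + (x + kx)];
            acc /= divisor;
            if (acc < 0)   acc = 0;     /* clamp to the valid pixel range */
            if (acc > 255) acc = 255;
            out[y * WIDTH + x] = (uint8_t)acc;
        }
    }
}

int main(void)
{
    /* Sharpening (edge-enhancement) kernel; a 3x3 box filter with a
     * divisor of 9 would give basic noise reduction instead. */
    const int sharpen[3][3] = { {  0, -1,  0 },
                                { -1,  5, -1 },
                                {  0, -1,  0 } };

    uint8_t *in  = calloc(WIDTH * HEIGHT, 1);
    uint8_t *out = calloc(WIDTH * HEIGHT, 1);
    convolve3x3(in, out, sharpen, 1);
    free(in);
    free(out);
    return 0;
}
```

More complex algorithms such as object recognition or optical flow build on exactly this kind of per-pixel and per-window arithmetic, which is why it maps well onto parallel FPGA fabric.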

System Supervision And Control

This is separate from the sensor-interfacing and image-processing functions, and focuses on two aspects. The first is to handle configuration of the image-processing chain, provide analytics on the image and update the image-processing chain as required during algorithm execution. The second aspect is the control and management of the wider embedded vision system, which involves taking care of:

• Power management and sequencing of image device power rails;

• Performing self-test and other system management functions;

• Network-enabled or point-to-point communication;

• Configuration of the image device over an I2C or SPI link prior to the first imaging operations.

Some applications may also allow the system supervision to access a frame store and execute algorithms on the frames within. In this case the system supervision effectively becomes part of the image-processing chain.

Implementation Challenges

The sensor interface, image-processing chain and display or memory interface all require the ability to handle high data bandwidths. The system supervision and control, on the other hand, must be able to process and respond to commands received over the communications interface, and provide support for external communications. If the system supervision is to form part of the image-processing chain as well, then a high-performance processor is required.

To meet such demands, embedded vision systems can be implemented using a combination of a main processor and companion FPGA, or a programmable system-on-chip (SoC) such as a Xilinx Zynq device that closely integrates a high-performance processor with FPGA fabric; see Figure 1. Challenges within each of the three high-level areas influence the individual functions that are needed and how they are implemented. 

Device Interface 

The sensor interface is determined by the selected image sensor. Most embedded vision applications use CMOS Imaging Sensors (CIS), which may have either a CMOS parallel output bus with status flags, or high-speed serialized communication that supports faster frame rates than are possible with a parallel interface. The serialized approach can simplify system interfacing at the expense of a more complex FPGA implementation.

To enable synchronization, it is common to have data channels that carry the image and other data words, coupled with a synchronization channel containing code words that define the content on the data channels. Along with the data and synchronization lanes there is also a clock lane, as the interface is source-synchronous. These high-speed serialized lanes are normally implemented as LVDS or reduced-swing LVDS to reduce system noise and power.
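As a rough illustration of how the synchronization channel is interpreted, the sketch below classifies sync-channel code words so the receiver knows what the accompanying data-channel word contains. The code values and word width are hypothetical placeholders: the real values are sensor-specific and come from the device datasheet, and in practice this decoding lives in the FPGA's deserialization logic rather than in C.

```c
#include <stdint.h>

/* Hypothetical sync code words; real values are sensor-specific. */
#define SYNC_FRAME_START 0x2AC
#define SYNC_FRAME_END   0x2A9
#define SYNC_LINE_START  0x0AC
#define SYNC_LINE_END    0x0A9
#define SYNC_BLANKING    0x015

/* What the word on the data channel represents, as indicated by the
 * code word received on the synchronization channel. */
typedef enum {
    WORD_PIXEL,        /* data channel carries a valid image pixel */
    WORD_BLANKING,     /* blanking interval, data word is ignored  */
    WORD_FRAME_START,
    WORD_FRAME_END,
    WORD_LINE_START,
    WORD_LINE_END
} word_type_t;

static word_type_t classify_sync(uint16_t sync)
{
    switch (sync) {
    case SYNC_FRAME_START: return WORD_FRAME_START;
    case SYNC_FRAME_END:   return WORD_FRAME_END;
    case SYNC_LINE_START:  return WORD_LINE_START;
    case SYNC_LINE_END:    return WORD_LINE_END;
    case SYNC_BLANKING:    return WORD_BLANKING;
    default:               return WORD_PIXEL;
    }
}
```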

Regardless of the interface type, the sensor must usually be configured before any image can be obtained; this is typically done via a general-purpose interface such as I2C or SPI. Implementing the sensor's data interface in the FPGA not only provides the high signal bandwidth required, but also allows for easier integration with the image-processing chain, while the I2C or SPI sensor-configuration interface may be implemented either in the FPGA or by the system supervision and control processor.
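As an example of this configuration step, the sketch below writes a few sensor registers over I2C from a Linux-based processor (such as the ARM cores on Zynq) through the standard i2c-dev interface. The device address, register addresses and values are placeholders; the actual settings come from the chosen sensor's datasheet.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

#define SENSOR_I2C_ADDR 0x36          /* placeholder 7-bit device address */

/* Write one 16-bit register address / 8-bit value pair to the sensor.
 * The two-byte register-address format is itself an assumption; the real
 * transaction layout comes from the sensor datasheet. */
static int sensor_write_reg(int fd, uint16_t reg, uint8_t val)
{
    uint8_t buf[3] = { reg >> 8, reg & 0xFF, val };
    return (write(fd, buf, sizeof(buf)) == (ssize_t)sizeof(buf)) ? 0 : -1;
}

int main(void)
{
    int fd = open("/dev/i2c-0", O_RDWR);
    if (fd < 0 || ioctl(fd, I2C_SLAVE, SENSOR_I2C_ADDR) < 0) {
        perror("i2c setup");
        return 1;
    }

    /* Placeholder register writes: exposure, analogue gain, start streaming. */
    sensor_write_reg(fd, 0x3012, 0x40);
    sensor_write_reg(fd, 0x3060, 0x10);
    sensor_write_reg(fd, 0x301A, 0x01);

    close(fd);
    return 0;
}
```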


Image-Processing Chain 

The image-processing chain consists of both the upstream and downstream elements, and interfaces with the pixel data output by the device interface. However, the pixels received may not be in the correct format to display an image. Image correction may also be needed, particularly if a colour sensor is used.

To maintain throughput at the required data rates, image-processing chains are often implemented in FPGAs to exploit their parallel nature and allow tasks to be pipelined for high frame rates. It is also vital to consider latency in applications like Advanced Driver Assistance Systems (ADAS). Basing the image-processing cores around a common protocol simplifies interconnection of the processing IP and establishes an efficient image-processing chain. Several protocols are widely used; AXI is one of the most common thanks to its flexibility in supporting both memory-mapped and streamed interfaces.
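To show how the memory-mapped side of AXI is typically used, the sketch below maps the control registers of one processing core into user space and programs it from the supervising processor. The base address and register offsets are hypothetical and would come from the actual hardware design; production software would more usually access the core through a kernel driver or the vendor's board support package.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

/* Hypothetical AXI-Lite base address and register offsets for one
 * image-processing core; real values come from the hardware design. */
#define CORE_BASE_ADDR   0x43C00000u
#define REG_CONTROL      0x00   /* bit 0: enable              */
#define REG_ACTIVE_WIDTH 0x10   /* active pixels per line     */
#define REG_ACTIVE_LINES 0x14   /* active lines per frame     */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    volatile uint32_t *regs = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, CORE_BASE_ADDR);
    if (regs == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Program the frame geometry, then enable the core. */
    regs[REG_ACTIVE_WIDTH / 4] = 1280;
    regs[REG_ACTIVE_LINES / 4] = 720;
    regs[REG_CONTROL / 4]      = 0x1;

    munmap((void *)regs, 0x1000);
    close(fd);
    return 0;
}
```

Pixel data itself would flow between cores over AXI streamed interfaces; the memory-mapped registers are only used for configuration and status.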

Typical processing stages within the image-processing chain are:

• Colour Filter Array Interpolation: Generation of each pixel’s full colour value from the sensor’s Bayer pattern;

• Colour Space Conversion: Conversion from RGB to the YUV format used in many image-processing algorithms and output schemes (a minimal sketch of one such conversion follows this list);

• Chroma Re-sampling: Conversion of YUV pixels to a more efficient pixel-encoding scheme;

• Image Correction: Algorithms such as colour or gamma correction, image enhancement and noise reduction;

• Output Formatting: On the downstream side, the video output timing is configured and the data is converted back to a native parallel video format before being sent to the required display driver.
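The colour-space-conversion stage called out in the list above is essentially a small per-pixel multiply-accumulate. The sketch below shows a fixed-point BT.601 RGB-to-YCbCr conversion as one common form of it; the 8-bit coefficient approximations are standard, and a hardware CSC core in the chain would perform the same arithmetic once per pixel clock.

```c
#include <stdint.h>

/* Fixed-point, full-range BT.601 RGB -> YCbCr conversion, one common
 * form of the colour-space-conversion stage. The coefficients are the
 * usual 8-bit approximations of 0.299/0.587/0.114 and related terms. */
static void rgb_to_ycbcr(uint8_t r, uint8_t g, uint8_t b,
                         uint8_t *y, uint8_t *cb, uint8_t *cr)
{
    int yy  = ( 77 * r + 150 * g +  29 * b) / 256;
    int cb_ = 128 + (128 * b -  43 * r -  85 * g) / 256;
    int cr_ = 128 + (128 * r - 107 * g -  21 * b) / 256;

    /* Clamp to the 8-bit range to guard against rounding at the extremes. */
    *y  = (uint8_t)(yy  < 0 ? 0 : yy  > 255 ? 255 : yy);
    *cb = (uint8_t)(cb_ < 0 ? 0 : cb_ > 255 ? 255 : cb_);
    *cr = (uint8_t)(cr_ < 0 ? 0 : cr_ > 255 ? 255 : cr_);
}
```

Chroma re-sampling then typically reduces the two chroma components to half the horizontal rate (4:2:2) to save bandwidth, which is the more efficient encoding referred to above.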

Some systems also use external DDR memory as a frame store. Often, this memory is also available to the processor on the SoC, enabling the system supervisor to transfer data over interfaces such as Gigabit Ethernet or USB if required, or to act as an extension to the image-processing chain.
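As a small example of the supervisor acting as an extension of the chain, the sketch below builds a luminance histogram over a frame held in the DDR frame store, the kind of analytics a supervisor might run. How the frame buffer appears in the processor's address space (here simply a pointer) and the frame geometry are assumptions made to keep the sketch short.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical frame geometry; the real values follow the sensor setup. */
#define WIDTH  1280
#define HEIGHT 720

/* Build a 256-bin luminance histogram from an 8-bit grayscale frame that
 * the image-processing chain has written into the DDR frame store. */
static void frame_histogram(const uint8_t *frame, uint32_t hist[256])
{
    for (int i = 0; i < 256; i++)
        hist[i] = 0;

    for (size_t i = 0; i < (size_t)WIDTH * HEIGHT; i++)
        hist[frame[i]]++;
}
```

The supervisor could, for instance, feed such statistics back into the sensor's exposure register over I2C, or forward them over Gigabit Ethernet or USB as described above.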

System Supervision 

System supervision is traditionally implemented in the processor, handling the commands that configure the image-processing chain for the application. To receive and process these commands, the system supervision must be able to support several communications interfaces, ranging from simple RS232 through Gigabit Ethernet, USB and PCIe to more specialized interfaces such as CAN.

If the architecture of the embedded vision system permits, the processor can be used to generate image overlay information which may be superimposed on the output image. Access to the image data also allows the processor to handle other tasks.
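As an illustration of such an overlay, the sketch below draws a bounding-box outline directly into a frame buffer before it continues downstream. The single-channel 8-bit buffer, the frame dimensions and the assumption that the coordinates lie within the frame are simplifications for brevity; a colour output would write into each plane, or the overlay could be blended by a dedicated on-screen-display core in the chain.

```c
#include <stdint.h>

/* Hypothetical frame geometry; coordinates are assumed to lie in range. */
#define WIDTH  1280
#define HEIGHT 720

/* Draw a one-pixel-wide rectangle outline (for example, around a detected
 * object) directly into an 8-bit grayscale frame buffer. */
static void draw_box(uint8_t *frame, int x0, int y0, int x1, int y1,
                     uint8_t value)
{
    for (int x = x0; x <= x1; x++) {
        frame[y0 * WIDTH + x] = value;   /* top edge    */
        frame[y1 * WIDTH + x] = value;   /* bottom edge */
    }
    for (int y = y0; y <= y1; y++) {
        frame[y * WIDTH + x0] = value;   /* left edge   */
        frame[y * WIDTH + x1] = value;   /* right edge  */
    }
}
```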

In The Next Issue

The next article in this series will describe how to build a working vision system using readily available tools with an off-the-shelf kit containing a Zynq 7020 programmable SoC and a commercial image-sensing module.

For more information, please visit:  https://www.xilinx.com/products/design-tools/embedded-vision-zone.html

 

 

 
