Both homogeneous and heterogeneous approaches can be demonstrated in an All Programmable SoC. While the sensor types differ between the two applications, the end objective of both architectures is the same: to place two data sets within the processing system (PS) DDR memory while maximising the performance provided by the programmable logic fabric.
Considering the homogeneous approach first, the resulting implementation is a stereoscopic vision system in which each channel uses a CMOS imaging sensor. A major advantage is that only one image-processing chain needs to be developed: the same design can be instantiated twice within the programmable logic fabric for both image sensors. This enables a significant saving in development costs, even though the algorithms for calculating parallax require intensive processing.
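To give a flavour of the parallax processing involved, the following is a minimal sketch of block-matching disparity estimation in Python, one common way such systems compare the two views. The window size, search range and sum-of-absolute-differences cost are illustrative assumptions, not the specific algorithm used in the implementation described here.

```python
def sad(left_row, right_row, x_l, x_r, half):
    """Sum of absolute differences between two horizontal pixel windows."""
    return sum(abs(left_row[x_l + i] - right_row[x_r + i])
               for i in range(-half, half + 1))

def disparity(left_row, right_row, x, half=2, max_disp=8):
    """Best-matching horizontal shift (disparity) for pixel x in the left row.

    A feature at x in the left image appears at x - d in the right image;
    larger d means the object is closer to the cameras.
    """
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_disp + 1):
        if x - d - half < 0:
            break  # search window would fall off the left edge
        cost = sad(left_row, right_row, x, x - d, half)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

Because the same matching is repeated for every pixel, the computation parallelises naturally, which is one reason it maps well onto the programmable logic fabric.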
One of the most important requirements in such a system is to synchronise the two image-processing chains. When implementing the chains in parallel within the programmable logic fabric, this requirement can be met by applying the same clock to each chain, subject to appropriate constraints.
The architecture of the homogeneous approach shows the two image-processing chains, which are based predominantly upon readily available IP blocks. Image data is captured using a bespoke sensor-interface IP module and converted from parallel video format into an AXI stream. This allows for an easily extensible image-processing chain, the results of which can be transferred into the PS DDR using the high-performance AXI interconnect combined with video DMA.
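The parallel-to-stream conversion can be modelled simply: in the AXI4-Stream video convention, each pixel transfer carries sideband signals where tuser marks the start of a frame and tlast marks the end of a line. The sketch below is a behavioural model of that conversion in Python, not the RTL itself.

```python
def to_axi_stream(frame):
    """Model of parallel-video to AXI4-Stream conversion.

    Yields one (data, tuser, tlast) tuple per pixel: tuser is asserted on
    the first pixel of the frame (start-of-frame), tlast on the final
    pixel of each row (end-of-line).
    """
    for row_i, row in enumerate(frame):
        for col_i, pixel in enumerate(row):
            sof = (row_i == 0 and col_i == 0)
            eol = (col_i == len(row) - 1)
            yield (pixel, sof, eol)
```

Downstream IP blocks can then be chained freely, since each only needs to honour the same streaming handshake.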
If a heterogeneous implementation using differing sensor types is considered, it could combine the image-sensor object-detection architecture described earlier with RADAR to perform distance detection. There are two options for implementing the RADAR: a pulsed (Doppler) approach or a continuous-wave approach. The best option will depend upon the requirements of the final application; however, both follow a similar implementation approach.
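The range equations behind the two options are standard: a pulsed radar derives range from the echo's round-trip time, while an FMCW continuous-wave radar derives it from the beat frequency between the transmitted and received chirps. A short sketch, with illustrative parameter values:

```python
C = 3.0e8  # speed of light, m/s (approximate)

def pulsed_range(round_trip_s):
    """Pulsed radar: range = c * round-trip time / 2."""
    return C * round_trip_s / 2.0

def fmcw_range(beat_hz, sweep_bandwidth_hz, sweep_time_s):
    """FMCW (continuous-wave) radar: range from the beat frequency,
    range = c * f_beat * T_sweep / (2 * B)."""
    return C * beat_hz * sweep_time_s / (2.0 * sweep_bandwidth_hz)
```

For example, a 1 microsecond round trip corresponds to a 150 m target, and a 100 kHz beat with a 150 MHz sweep over 1 ms corresponds to 100 m.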
The RADAR implementation can be considered in two parts: signal generation, including a high-speed digital-to-analogue converter to produce a continuous-wave or pulsed signal, and signal reception, using a high-speed analogue-to-digital converter to capture the received continuous-wave or pulsed signal. When it comes to signal processing, both approaches utilise FFT-based analysis implemented within the programmable logic fabric; the resultant data sets can then be transferred to the PS DDR using DMA.
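The essence of the FFT-based analysis is to locate the dominant frequency bin in the sampled return. The sketch below models this in Python with a naive O(N^2) DFT for clarity; a real implementation would use a pipelined FFT core in the fabric, and the 1 kHz tone and 16 kHz sample rate are illustrative assumptions only.

```python
import cmath
import math

def dft_mag(x):
    """Magnitudes of the discrete Fourier transform (naive O(N^2), for illustration)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

def dominant_freq(samples, fs):
    """Frequency (Hz) of the strongest bin in the lower half of the spectrum."""
    mags = dft_mag(samples)
    half = mags[1:len(mags) // 2]   # skip DC and the mirrored upper bins
    k = half.index(max(half)) + 1
    return k * fs / len(samples)

# Simulated beat signal: a 1 kHz tone sampled at 16 kHz.
fs, f_beat, n = 16000, 1000, 64
samples = [math.cos(2 * math.pi * f_beat * t / fs) for t in range(n)]
```

The recovered beat frequency would then feed a range equation such as the FMCW relation above, and the resulting range data set is what the DMA carries into the PS DDR.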
For either implementation, the fusion of the two data sets is performed in software within the PS. It is worth noting that fusion algorithms often place intensive demands on processing bandwidth. One option for achieving higher performance is to utilise the power of the SDSoC™ design environment.
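As a concrete (and deliberately simplified) illustration of such a fusion step, the sketch below pairs each camera detection with the nearest radar return in azimuth, attaching the radar's range measurement to the visually detected object. The data layout, field names and matching threshold are all assumptions for illustration, not the article's algorithm.

```python
def fuse(camera_dets, radar_returns, max_az_err_deg=5.0):
    """Associate camera detections with radar returns by azimuth.

    camera_dets:   list of (label, azimuth_deg) from the vision chain.
    radar_returns: list of (azimuth_deg, range_m) from the RADAR chain.
    Returns a list of fused (label, azimuth_deg, range_m) tuples; camera
    detections with no radar return within the threshold are dropped.
    """
    fused = []
    for label, cam_az in camera_dets:
        best = min(radar_returns,
                   key=lambda r: abs(r[0] - cam_az),
                   default=None)
        if best is not None and abs(best[0] - cam_az) <= max_az_err_deg:
            fused.append((label, cam_az, best[1]))
    return fused
```

Even this simple nearest-neighbour association is O(N*M) in the number of detections and returns, which hints at why production fusion algorithms can become a processing bottleneck on the PS alone.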
SDSoC enables software functions to be moved seamlessly between the processor and the programmable logic of a SoC, using Vivado® HLS and a connectivity framework, both of which are transparent to the software developer. The use of HLS to develop the processing chains of both the homogeneous and heterogeneous implementations can be extended further: designers can create a custom SDSoC platform for the chosen implementation and then use SDSoC to harness uncommitted logic resources to accelerate the overall embedded vision system still further.
As markets for embedded vision systems continue to grow, and as image-detection and other types of sensors become more readily available and affordable, system designers need increasingly fast and efficient sensor fusion methodologies. All Programmable FPGAs and SoCs can simplify implementation and synchronisation of multi-channel processing chains, while System and High-Level Synthesis tools help ensure increased system performance and on-time design completion.
For more information, please visit: http://www.xilinx.com/products/design-tools/embedded-vision-zone.html