Vision-Based System Design Part 7 – Leveraging Deep Learning Algorithms in Embedded Vision Systems

Giles Peckham, Regional Marketing Director at Xilinx
Adam Taylor CEng FIET, Embedded Systems Consultant

So far, this series of articles about developing embedded vision systems has focused on the image-processing pipeline, which contains functions such as camera or image-sensor interfacing, image reconstruction, and format translation for further processing. These functions are common to a wide variety of applications and rely on the same types of algorithms, including colour reconstruction (de-mosaicing the Bayer pattern), colour-space conversion, and noise reduction.
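To make those stages concrete, the short sketch below models them in software using standard OpenCV calls. It is purely illustrative: the file name and Bayer pattern are assumptions, and in the systems described in this series these stages would typically run in programmable logic rather than as software.

    #include <opencv2/opencv.hpp>

    // Illustrative only: a minimal software model of the pipeline stages named above.
    int main()
    {
        // Raw sensor data, one 8-bit value per pixel in a Bayer mosaic (file name assumed)
        cv::Mat raw = cv::imread("sensor_frame.png", cv::IMREAD_GRAYSCALE);
        if (raw.empty()) return -1;

        cv::Mat rgb, yuv, denoised;
        cv::cvtColor(raw, rgb, cv::COLOR_BayerBG2BGR);       // colour reconstruction (de-mosaic)
        cv::cvtColor(rgb, yuv, cv::COLOR_BGR2YUV);           // colour-space conversion
        cv::GaussianBlur(yuv, denoised, cv::Size(3, 3), 0);  // simple noise reduction

        return 0;
    }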

Application-specific algorithms, on the other hand, can vary widely and usually demand significantly more development effort. They can be complex to implement, and draw on techniques such as object detection and classification, filtering, and other computationally intensive operations; a simple example of such a detection step is sketched below.
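As a hypothetical illustration of an application-specific step, the sketch below runs OpenCV's stock HOG-plus-linear-SVM pedestrian detector over a frame. The input file name is an assumption, and a real system would tune or replace the model for its own use case.

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
        cv::Mat frame = cv::imread("street_scene.png");  // input image (name assumed)
        if (frame.empty()) return -1;

        // Object detection using OpenCV's built-in people detector
        cv::HOGDescriptor hog;
        hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

        std::vector<cv::Rect> detections;
        hog.detectMultiScale(frame, detections);

        // Overlay the detected regions on the frame
        for (const cv::Rect& r : detections)
            cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);

        return 0;
    }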

Application-Level Processing
If exceptional accuracy is needed, such as in a medical or scientific system that assists life-critical decision-making, intensive application-level processing may be done in the Cloud. Conversely, the processing may be done by on-board equipment that prioritises low latency, such as an automotive autonomous-driving system or a vision-guided robot that must quickly process and act on sensor information to navigate within its environment, even when not connected to the Cloud.
Whether executed on an edge device or in the Cloud, today’s embedded vision applications often rely on deep machine learning and artificial intelligence, although in different ways. A Cloud-based implementation will use deep machine learning and neural networks to generate a set of image classifiers, and then use these classifiers within its application. An autonomous application will implement its object-detection algorithms using classifiers previously generated by deep machine learning in the Cloud.
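A minimal sketch of that edge-side flow, assuming a Caffe model trained in the Cloud and OpenCV's dnn module running on the device, might look as follows. The file names, input size and class handling are placeholders, not part of any particular product.

    #include <opencv2/opencv.hpp>
    #include <opencv2/dnn.hpp>

    int main()
    {
        // Network architecture and weights produced by Cloud-based training (names assumed)
        cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt", "trained.caffemodel");

        cv::Mat frame = cv::imread("camera_frame.png");
        if (frame.empty()) return -1;

        // Pre-process to the network's expected input size and run a forward pass
        cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0, cv::Size(227, 227));
        net.setInput(blob);
        cv::Mat scores = net.forward();   // per-class confidence scores

        // Pick the most likely class
        cv::Point classId;
        double confidence;
        cv::minMaxLoc(scores.reshape(1, 1), nullptr, &confidence, nullptr, &classId);
        // classId.x now holds the index of the most likely class
        return 0;
    }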
An autonomous system should also be upgradeable as the product roadmap advances, and power efficiency and security are further important considerations. All Programmable System-on-Chip (SoC) and Multi-Processor System-on-Chip (MPSoC) devices provide a foundation for achieving these goals, and can also implement the image-processing pipeline, leveraging parallelism to maximise performance. In addition, the processing cores can be used for higher-level application functionality.

Developing Deep Learning Algorithms for Programmable Logic
Deep learning algorithms are often developed using open-source frameworks such as OpenCV and Caffe, which contain pre-defined functions and IP that help simplify the embedded-vision developer’s task. In the case of an autonomous system implemented in programmable devices, developers need extra support to use these frameworks within the development ecosystem that surrounds their chosen programmable logic devices. Xilinx has developed the reVISION™ Stack to provide this support for users of the All Programmable Zynq®-7000 and All Programmable Zynq® UltraScale+™ MPSoC families.
The stack provides all the necessary elements to create high-performance imaging applications, and is optimised for the needs of autonomous, non-Cloud-connected systems. It is organised in three distinct layers (figure 1) to enable platform, algorithm, and application development.

Platform layer
This is the lowest level of the stack, on which the remaining layers are built. It provides both a hardware definition and a supporting software definition via a customised operating system. The hardware definition can describe the configuration of either a development board or a production-ready board such as a System-on-Module, and it is here that the sensor and system interfaces are defined. The hardware platform is captured using Vivado® HLx and may leverage IP blocks from both Xilinx and third-party suppliers, or specialist IP created using high-level synthesis. This layer also provides software drivers for the IP modules and, if required, an updated PetaLinux configuration to support the software-defined environment at the higher levels.

Algorithm layer
Development at this level takes place within the Eclipse-based SDSoC™ environment. SDSoC is a system-optimising compiler that allows development in a software-defined environment. It is at this level that OpenCV is used to implement the image-processing algorithms for the application at hand. As the software algorithms are developed, performance bottlenecks are identified and removed by accelerating functions into the programmable logic. This occurs seamlessly, using a combination of high-level synthesis and a connectivity framework to move a function from executing in software to an implementation in the programmable logic, as sketched below. reVISION provides a wide range of acceleration-ready OpenCV functions to aid this process. Support is also provided at this level for the most common neural networks, including AlexNet, GoogLeNet, SqueezeNet, SSD, and FCN.
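As a rough illustration of what a function prepared for acceleration can look like (a hedged sketch, not reVISION library code), the example below shows a trivial 3-tap filter annotated with standard SDSoC and HLS directives so that the tools can move it into the programmable logic. The image dimensions and the filter itself are assumptions.

    // Sketch of a candidate function for hardware acceleration in SDSoC.
    #define WIDTH  1920
    #define HEIGHT 1080

    // Tell the SDSoC data movers that both arrays are streamed sequentially
    #pragma SDS data access_pattern(in_pix:SEQUENTIAL, out_pix:SEQUENTIAL)
    void filter_accel(const unsigned char in_pix[WIDTH * HEIGHT],
                      unsigned char out_pix[WIDTH * HEIGHT])
    {
        unsigned char window[3] = {0, 0, 0};
        for (int i = 0; i < WIDTH * HEIGHT; i++) {
    #pragma HLS PIPELINE II=1
            // Shift a 3-pixel window along the stream
            window[0] = window[1];
            window[1] = window[2];
            window[2] = in_pix[i];
            // Simple 3-tap average as the accelerated operation
            out_pix[i] = (unsigned char)((window[0] + window[1] + window[2]) / 3);
        }
    }

Once the function is marked for hardware in SDSoC, calls to filter_accel() from the application code are routed to the programmable-logic implementation without changes to the calling software.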

Application development
This is where high-level frameworks such as Caffe and OpenVX are used to complete the application by implementing functionality such as decision-making. Applications at this level are developed using an Eclipse-based environment targeting the processor cores within the All Programmable Zynq-7000 and All Programmable Zynq UltraScale+ MPSoC devices.
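As a hypothetical illustration of application-level development with OpenVX, the sketch below builds a minimal processing graph (a single Gaussian filter) and executes it. The image dimensions are assumptions, and a real application would feed camera frames into the graph and add decision-making logic around its results.

    #include <VX/vx.h>

    int main()
    {
        vx_context context = vxCreateContext();
        vx_graph   graph   = vxCreateGraph(context);

        vx_image input  = vxCreateImage(context, 1920, 1080, VX_DF_IMAGE_U8);
        vx_image output = vxCreateImage(context, 1920, 1080, VX_DF_IMAGE_U8);

        // Nodes added to the graph can later be mapped to accelerated kernels
        vxGaussian3x3Node(graph, input, output);

        if (vxVerifyGraph(graph) == VX_SUCCESS)
            vxProcessGraph(graph);

        vxReleaseContext(&context);   // releases the graph and images as well
        return 0;
    }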

