
Responsive and Reconfigurable Vision Systems

Leading system developers are using All Programmable Devices in next-generation, vision-guided machine learning systems. To accelerate productivity, Xilinx has created the reVISION Zone to aggregate useful resources for software, hardware, and system developers.

For developers who wish to share their reference designs, libraries, and experience, we have also included a section with community projects.

Begin today by exploring this zone and get started building responsive and reconfigurable vision guided systems.

Be among the first to be notified of reVISION news and updates from Xilinx.

View our Customers and Partners who endorse the reVISION Stack.

Differentiating Advantages

  • More Responsive than Typical SoCs & Embedded GPUs
  • Reconfigurable to the Latest Algorithms and Sensors
  • Software Defined & Improved Ease of Use

reVISION Enables Responsive and Reconfigurable Vision Systems

More Responsive than Typical SoCs & Embedded GPUs:

  • 6X better images/sec/Watt in machine learning
  • 42X higher frames/sec/Watt for computer vision processing
  • 1/5th the latency

Reconfigurable to the Latest Algorithms and Sensors:

  • Continue to upgrade to the latest machine learning algorithms
  • Support the latest sensor types and connectivity standards
  • Support up to 8K and custom resolutions

Software Defined & Improved Ease of Use:

  • Accelerate development with ready to use OpenCV libraries
  • Leverage any combination of C/C++ and OpenCL languages
  • Develop with popular Machine Learning frameworks including Caffe

Featured Videos

Zynq® All Programmable SoCs and MPSoCs

Hundreds of Xilinx’s Embedded Vision customers target Zynq® All Programmable SoCs and MPSoCs in addition to FPGAs.

Zynq-based Platforms Enable:

  1. Acceleration of computer vision and machine learning algorithms for fast system response
  2. The reconfigurability required for rapid upgrades to the best available type and mix of sensors
  3. Any-to-any connectivity to new machines and/or the cloud

To address the challenges mentioned above, Xilinx provides the reVISION stack which includes a broad range of development resources for platform, algorithm and application development. 

This includes support for the most popular neural networks (AlexNet, GoogLeNet, SqueezeNet, SSD, and FCN), the functional elements required to build custom neural networks (CNN/DNN), and pre-defined, optimized CNN implementations for network layers. This is complemented by a broad set of acceleration-ready OpenCV functions for computer vision processing.

For application level development, Xilinx supports popular frameworks including Caffe for machine learning and OpenVX for computer vision (to be released in second half 2017). The reVISION stack also includes development platforms from Xilinx and ecosystem partners based on Zynq SoCs and MPSoCs.


The reVISION stack enables design teams without deep hardware expertise to use a software defined development flow to combine efficient implementations of machine learning and computer vision algorithms into highly responsive systems.  

The reVISION flow starts with a familiar, Eclipse-based environment using the C, C++, and/or OpenCL languages and associated compiler technology; this is called the SDSoC environment.

Within the SDSoC environment, software and systems engineers can target reVISION hardware platforms and draw from a pool of acceleration-ready computer vision libraries and/or the OpenVX framework (late Summer 2017) to quickly build new applications.
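To make the software-defined flow concrete, the sketch below shows the general shape of a function written in the SDSoC environment so it can be moved into the programmable logic without any RTL design. The function name, image size, and threshold are illustrative only and not part of any Xilinx library; the pragmas follow the general pattern documented for SDSoC and Vivado HLS, and exact usage depends on tool version and platform.

    // threshold_hw.cpp - illustrative SDSoC-style hardware function (not a Xilinx library API)
    #include <stdint.h>

    #define WIDTH  1920   // example frame size, chosen arbitrarily
    #define HEIGHT 1080

    // Hint that both buffers are accessed sequentially so the SDSoC compiler
    // can infer streaming, DMA-based data movers between memory and the fabric.
    #pragma SDS data access_pattern(src:SEQUENTIAL, dst:SEQUENTIAL)
    void threshold_hw(uint8_t src[WIDTH * HEIGHT],
                      uint8_t dst[WIDTH * HEIGHT],
                      uint8_t thresh)
    {
        for (int i = 0; i < WIDTH * HEIGHT; i++) {
    #pragma HLS PIPELINE II=1
            dst[i] = (src[i] > thresh) ? 255 : 0;
        }
    }

The application calls threshold_hw() like any other C function; whether it executes on the Cortex-A cores or in the fabric is a build-time decision made in the SDSoC project.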

For machine learning, popular frameworks like Caffe are used to train a neural network. The Caffe generated .prototxt file is run on an ARM® based scheduler that drives inference processing on pre-optimized implementations of CNN network layers.  


Traditionally, expert Xilinx users deploying RTL-based design flows, working alongside ARM-based software developers, have spent considerable design time creating highly differentiated machine learning and computer vision applications.

To further speed design time and reduce the reliance on hardware experts, Xilinx introduced the SDSoC Development Environment, based on C, C++ and OpenCL. While this significantly reduces development cycles, it is not domain specific for Embedded Vision.

Xilinx’s new reVISION stack enables a much broader set of software and systems engineers, with little or no hardware design expertise, to develop intelligent Embedded Vision systems easier and faster. 


Get started today designing your computer vision system around Zynq SoCs/MPSoCs and FPGAs by leveraging existing Xilinx and ecosystem design hardware, modules and production-ready Systems on Module (SOMs).

Join the discussion on Xilinx forums.

Computer Vision

Being able to implement computer vision algorithms in Zynq SoCs/MPSoCs and FPGAs lets developers create highly responsive and reconfigurable vision-guided systems, capable of up to 42x better frames/sec/Watt than alternative GPU-based SoCs. Traditionally, implementing computer vision algorithms in Zynq and FPGA devices has required very tight collaboration between software and hardware teams, which has limited the ability of software developers to leverage the high-performance capability of the platform. SDSoC, along with the OpenCV library, is now opening up the Zynq platform to a whole new group of users. Xilinx will also introduce framework support for OpenVX graph-based design in the second half of 2017.

OpenCV library functions are essential to developing many computer vision applications. Xilinx's library for computer vision, based on key OpenCV functions, allows you to easily compose and accelerate computer vision functions in the FPGA fabric through the SDx or HLx environments (a minimal pipeline sketch follows the list below). In addition, the Xilinx library functions are consistent with OpenCV and are optimized for performance, resource utilization, and ease of use.

  • Thousands of functions in the OpenCV 3.1 library are available to run on the ARM Cortex™-A9 and Cortex-A53 cores in Zynq
  • ~45 OpenCV functions (the OpenVX subset) are available as a library of RTL-optimized functions for Xilinx SoCs
  • Complete library user guide with device utilization and performance data
  • 1- and 8-pixel-parallel versions are available for most functions
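As a concrete starting point, the sketch below composes a small pipeline with the standard OpenCV 3.1 API, which runs as-is on the ARM cores in Zynq; the file names, kernel size, and threshold are arbitrary. Operations like these (Gaussian filtering, Sobel, thresholding) are among the functions with accelerated counterparts, so once the pipeline is profiled, the hot functions can be retargeted to the fabric through SDSoC.

    // pipeline.cpp - minimal OpenCV 3.1 pipeline running on the ARM cores (illustrative)
    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat src = cv::imread("input.png", cv::IMREAD_GRAYSCALE);   // arbitrary test image
        if (src.empty()) return 1;

        // Candidate functions for hardware acceleration once profiling
        // identifies them as bottlenecks.
        cv::Mat blurred, grad_x, grad_y, abs_x, abs_y, edges;
        cv::GaussianBlur(src, blurred, cv::Size(5, 5), 1.5);
        cv::Sobel(blurred, grad_x, CV_16S, 1, 0, 3);
        cv::Sobel(blurred, grad_y, CV_16S, 0, 1, 3);

        // Combine the gradients and binarize the result.
        cv::convertScaleAbs(grad_x, abs_x);
        cv::convertScaleAbs(grad_y, abs_y);
        cv::addWeighted(abs_x, 0.5, abs_y, 0.5, 0, edges);
        cv::threshold(edges, edges, 50, 255, cv::THRESH_BINARY);

        cv::imwrite("edges.png", edges);
        return 0;
    }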

reVISION Design Flow for Computer Vision

  1. Cross-compile the OpenCV application for Zynq (ARM Cortex-A9/A53)
  2. Profile and identify bottleneck functions
  3. Make minimal code changes to move those functions to hardware, then compile using SDSoC
  4. Copy the generated SW/HW images to an SD card and run on a Zynq board

Library Functions

The functions are grouped into three levels, from simple (left) to more complex (right).

Level 1: Absolute difference, Accumulate, Accumulate squared, Accumulate weighted, Arithmetic addition, Arithmetic subtraction, Bitwise (AND, OR, XOR, NOT), Pixel-wise multiplication, Integral image, Gradient Magnitude, Channel combine, Channel extract, Color convert, Convert bit depth, Table lookup, Histogram, Gradient Phase, Min/Max Location, Mean & Standard Deviation

Level 2: Box, Gaussian, Median, Sobel, Custom convolution, Dilate, Erode, Bilateral, Thresholding, Scale/Resize, StereoRectify, Warp Affine, Warp Perspective, Fast corner, Harris corner, Remap, Equalize Histogram

Level 3: Histogram of Oriented Gradients (HOG), ORB, SVM (binary), OTSU Thresholding, Mean Shift Tracking (MST), LK Dense Optical Flow, Canny edge detection, Image pyramid, Color Detection, StereoLBM

Get started today designing your computer vision system around Zynq SoCs/MPSoCs and FPGAs by leveraging existing Xilinx and ecosystem design hardware, modules and production-ready Systems on Module (SOMs).

Be among the first to be notified of reVISION news and updates from Xilinx.

Join the discussion on Xilinx forums.

Machine Learning

Machine learning and deep learning have gained attention from the development community as techniques that bring enhanced intelligence to many applications, including Embedded Vision. While not a new discipline, relatively recent breakthroughs in algorithms, access to large data sets for training, and more efficient and economically viable computing platforms have driven very rapid interest in and adoption of the technology.

Xilinx’s Zynq SoCs/MPSoCs are an ideal fit for machine learning, achieving 6X better images/sec/Watt in machine learning inference relative to embedded GPUs and typical SoCs. Xilinx’s reVISION Stack removes traditional design barriers by allowing you to quickly take a trained network and deploy it on Zynq SoCs and MPSoCs for inference.

Features:

  • Full software stack for deploying machine learning applications
  • Hardware optimized libraries supporting Conv, ReLU, Pooling, Dilated conv, Deconv, FC, Detector & Classifier, SoftMax layers
  • Caffe interoperability allows easy porting of prototxt network definitions and trained weights
  • Optimized reference models available for a wide range of network topologies, such as AlexNet, GoogLeNet, SqueezeNet, FCN and SSD
  • Networks can be customized through software running on the ARM processor without lengthy compilation

Deploying Networks

  1. Import the .prototxt file and trained weights
  2. Call the prototxt runtime API from your application (a minimal sketch follows this list)
  3. Cross-compile for the Cortex-A53 and run on a board
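The three steps above map onto a small host application running on the Cortex-A53. The sketch below only illustrates the shape of such code: the revision_net.h header, the NetworkRunner class, its methods, and the load_image_as_tensor() helper are hypothetical placeholders rather than the actual reVISION runtime API, which is documented with the stack itself.

    // deploy_net.cpp - hypothetical sketch of steps 1-3 above; the header, class,
    // and helper names below are placeholders, NOT the actual Xilinx runtime API.
    #include <algorithm>
    #include <cstdio>
    #include <vector>
    #include "revision_net.h"   // hypothetical runtime header

    int main()
    {
        // Step 1: import the Caffe-generated network definition and trained weights
        // (file names are illustrative).
        NetworkRunner net("googlenet.prototxt", "googlenet.caffemodel");

        // Step 2: the runtime schedules each layer onto the pre-optimized hardware
        // implementations (Conv, ReLU, Pooling, FC, SoftMax, ...).
        std::vector<float> input = load_image_as_tensor("test.jpg");   // user-supplied helper
        std::vector<float> scores = net.run(input);

        // Step 3: cross-compile this application for the Cortex-A53 and run it on the board.
        long top1 = (long)(std::max_element(scores.begin(), scores.end()) - scores.begin());
        std::printf("top-1 class: %ld\n", top1);
        return 0;
    }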

Get started today designing your computer vision system around Zynq SoCs/MPSoCs and FPGAs by leveraging existing Xilinx and ecosystem design hardware, modules and production-ready Systems on Module (SOMs).

Be among the first to be notified of reVISION news and updates from Xilinx.

Join the discussion on Xilinx forums.

Connectivity & Sensor Support

The AI revolution has accelerated the development and evolution of sensor technologies across numerous categories. It has also created a mandate for a new level of sensor fusion, combining multiple types of sensors to build a complete view of the system's environment and the objects in it. Whatever sensor configuration is specified today, or implemented tomorrow, needs to be 'future proofed' through hardware reconfigurability. Only Xilinx All Programmable devices offer this level of reconfigurability.

Zynq-based vision platforms offer robust any-to-any connectivity and sensor interfaces. Zynq sensor and connectivity advantages include:

  • Up to 12x more bandwidth relative to alternative SoCs currently in the market, including support for native 8K and custom resolutions
  • Significantly more high and low bandwidth sensor interfaces and channels, enabling highly differentiated combinations of sensors such as RADAR, LiDAR, accelerometers and force torque sensors
  • Industry leading support for the latest data transfer and storage interfaces, easily reconfigured for future standards

Get started today designing your machine learning-based system around Zynq SoCs/MPSoCs and FPGAs by leveraging existing Xilinx and ecosystem design hardware, modules and production-ready Systems on Module (SOMs).

Be among the first to be notified of reVISION news and updates from Xilinx.

Join the discussion on Xilinx forums.

Sensor Category: Zynq SoCs/MPSoCs Interfaces | Advantage vs. Alternatives

  • MIPI Interfaces / Camera Support: 96 MIPI lanes; 18-48 cameras with up to 8K resolution | 8x more bandwidth; the only 8K option; custom resolutions
  • Video Interfaces: HDMI 2.0 (input/output), DisplayPort 1.2/1.4, 12G-SDI, MIPI-DSI | A multitude of 4K interfaces combinable to support 8K and custom resolutions
  • High Bandwidth Sensors (RADAR, LiDAR…): 48x CAN/CAN-FD, 1GbE (AVB), SPI | More smart sensor interfaces than any other SoC (up to 48 channels)
  • Low Bandwidth Sensors (accelerometer, force torque…): I2C, UART, GPIO | More channels of lower-bandwidth I/O than any other SoC
  • Data Transfer & Storage Interfaces: USB 2.0/3.0, PCIe Gen 1.0/2.0/4.0 (PL) x4/x8, 10GE, SATA 3.1, NAND/NOR, SD/eMMC | Industry-leading support for high-bandwidth data transfer and storage interfaces

The reVISION stack targets both Zynq SoCs and MPSoCs. Xilinx and its ecosystem members produce several boards that can enable your development with the reVISION stack. The following is a list of boards supported by the stack, including production-ready Systems on Module (SOMs).

Base Zynq Board: ZCU102 | ZCU104 | ZC702 | ZC706
Device: ZU9 (16nm) | ZU7 (16nm) | Z7020 (28nm) | Z7045 (28nm)
CPU: Quad Cortex-A53 up to 1.5GHz | Quad Cortex-A53 up to 1.5GHz | Dual Cortex-A9 up to 1.0GHz | Dual Cortex-A9 up to 1.0GHz
Peak GOPS @ INT8: 7857 | 5386 | 571 | 2331
On-chip Memory (MB): 4.0 | 4.8 | 0.6 | 2.4
Inputs: USB3, MIPI, HDMI | USB3, MIPI, HDMI | HDMI* | HDMI*
Outputs: HDMI, DisplayPort | HDMI, DisplayPort | HDMI | HDMI
Video Codec Units: No | 4K60 Encode/Decode | No | No
reVISION Support: xFopencv, xFdnn | xFopencv, xFdnn | xFopencv, xFdnn | xFopencv, xFdnn
Sensor Input: Sony IMX274 | Quad OnSemi AR0231 | StereoLabs Zed Stereo | eCon camera
Spec: 3840x2160 @ 60 FPS | 1920x1080 @ 30 FPS | 3840x1080 @ 30 FPS | 1920x1080 @ 60 FPS
Interface: MIPI via FMC | MIPI via FMC | USB3 | USB3

*Requires an HDMI IO FMC card

Camera Modules & FMCs

  • Most HDMI and USB camera sources supported
  • Single (Sony IMX274) sensor FMC (available May 2017) and quad (ON Semi AR0231) sensor FMC (planned November 2017)
  • StereoLabs Zed camera module
Board Name Description Vendor
PicoZed Kit (Zynq-7000) PicoZed™ is a highly flexible, rugged, System-On-Module, or SOM that is based on the Xilinx Zynq®-7000 All Programmable SoC. It offers designers the flexibility to migrate between the 7010, 7015, 7020, and 7030 Zynq-7000 All Programmable SoC devices in a pin-compatible footprint. Avnet
MicroZed Kit (Zynq-7000) MicroZed™ is a low-cost development board based on the Xilinx Zynq®-7000 All Programmable SoC. Its unique design allows it to be used as both a stand-alone evaluation board for basic SoC experimentation, or combined with a carrier card as an embeddable system-on-module (SOM). Avnet
UltraZed-EG SOM (Zynq UltraScale+ MPSoC) UltraZed-EG™ SOM is a highly flexible, rugged, System-On-Module (SOM) based on the Xilinx Zynq® UltraScale+™ MPSoC. Designed in a small form factor (2.0” x 3.5”), the UltraZed-EG SOM packages all the necessary functions such as:
  • System memory
  • Ethernet
  • USB
  • Configuration memory needed for an embedded processing system
Avnet
Mercury+ XU1 (Zynq UltraScale+ MPSoC) The Mercury+ XU1 system-on-chip (SoC) module combines Xilinx's Zynq UltraScale+™ MPSoC (ZU6/9/15) with fast DDR4 ECC SDRAM, eMMC flash, quad SPI flash, dual Gigabit Ethernet PHY, dual USB 3.0 and an RTC and thus forms a complete and powerful embedded processing system.
(Contact for availability)
Enclustra
Mercury ZX1 (Zynq-7000) The Mercury ZX1 system-on-chip (SoC) module combines Xilinx’s Zynq-7000 (Zynq-7030/35/45) All-Programmable SoC device with fast DDR3 SDRAM, NAND flash, quad SPI flash, a Gigabit Ethernet PHY, dual Fast Ethernet PHY and an RTC, and thus forms a complete and powerful embedded processing system.
(Contact for availability)
Enclustra
Atlas I-Z7e (Zynq-7000) The Atlas-I-Z7e™ is a low power, small form factor system-on-a-module (SoM) featuring the Xilinx® Zynq™-7000 All-Programmable SoC. The Zynq-7000 architecture consists of a dual core 800 MHz ARM® Cortex™-A9 and 28nm programmable logic inside a single chip. iVeia
Atlas II-Z7x (Zynq-7000) The Atlas-II-Z7x™ is a high-performance, small form factor processing module featuring the Zynq™-7000 All-Programmable SoC from Xilinx®. The Zynq devices integrate dual ARM® Cortex™-A9 processors with a 28-nm FPGA into a single device. iVeia
Atlas II-Z8 (Zynq UltraScale+ MPSoC) The Atlas-II-Z8 System-on-a-Module (SoM) is an advanced high performance heterogeneous computing architecture on a module the size of a credit card. iVeia
Atlas III-Z8 (Zynq UltraScale+ MPSoC) The Atlas-III-Z8 hosts up to the largest Zynq® UltraScale+ MPSoC device and supports two additional banks of flexible I/O. An Atlas-III baseboard slot can support an Atlas-II device. iVeia
TE0726 (Zynq-7000) The TE0726 “Zynqberry” is a Raspberry Pi-compatible FPGA module integrating a Zynq-7010, 512 MByte DDR3L SDRAM, 4 USB ports, an Ethernet port and 16 MByte Flash memory for configuration and operation. Trenz Electronic
TE0715 (Zynq-7000) The TE0715 is an industrial-grade Zynq-7000 SoM with 4 MGT Links, a Gigabit Ethernet transceiver, 1 GByte DDR3 SDRAM with 32-bit width, 32 MByte QSPI Flash memory and powerful switch-mode power supplies for all on-board voltages. Trenz Electronic
TE0720 (Zynq-7000) The TE0720 is an industrial-grade SoC module integrating a Zynq-Z020, a gigabit Ethernet transceiver (physical layer), 8 GBit (1 GByte) DDR3 SDRAM with 32-bit width, 32 MByte Flash memory for configuration and operation, and powerful switch-mode power supplies for all on-board voltages. Trenz Electronic
TE0808 (Zynq UltraScale+ MPSoC) The TE0808-03 is an industrial-grade MPSoC module integrating a Zynq UltraScale+, 2 GByte (4 x 512 MByte) DDR4 SDRAM with 64-Bit width, 64 MByte (2 x 32 MByte) Flash memory for configuration and operation, 20 Gigabit transceivers, and powerful switch-mode power supplies for all on-board voltages. Trenz Electronic
Arty Z7-20 (Zynq-7000) The Arty Z7 is a ready-to-use development platform designed around the Zynq-7000™ All Programmable System-on-Chip (AP SoC) from Xilinx. The Zynq-7000 architecture tightly integrates a dual-core, 650 MHz ARM Cortex-A9 processor with Xilinx 7-series Field Programmable Gate Array (FPGA) logic. This pairing grants the ability to surround a powerful processor with a unique set of software defined peripherals and controllers, tailored by you for whatever application is being conquered. Digilent, Inc.
ZingDVP The ZingDVP embedded vision kit is built on the Zynq-7045 SoC with seamlessly integrated HDMI (or Camera Link) input and output. ZingDVP enables customers to create well differentiated and powerful designs in the areas of Machine Vision, VR, and Video Analysis. V3 Technology
EagleGo HD (Zynq-7000) The EagleGo HD embedded vision kit is built on the Zynq-7000 All Programmable SoC with seamlessly integrated ARM Cortex-A9 Processors and FPGA logic. EagleGo HD enables customers to create well differentiated and powerful designs in the area of Industrial Control, Machine Vision, Video image processing, and Test and Measurement. V3 Technology
ZURA SOM* (Zynq UltraScale+ MPSoC) The ZURA “Zynq UltraScale+ for Radio & ADAS” system on module features the ZU3EG-SFVA625. (planned release Q1 CY17)
(Contact for availability)
V3 Technology
Eiger Eiger is a Zynq-7020 based, low-cost, small form factor intelligent camera. It connects to various sensor modules and provides image processing solutions in both HW and SW. The Eiger-EMU development environment is also included. Regulus

Embedded Vision is an increasingly complex and multidisciplinary domain. Having expert support to augment your project can help ensure a successful product introduction. Xilinx has curated a qualified set of design services companies that have extensive experience in embedded vision system design. These alliance program member companies have gone through Xilinx’s certification program to ensure they are suitable to support your system development needs.

Member Region Description IP Boards Software
Digital Design Corporation (DDC) North America DDC, a Xilinx Premier Design Services Partner, specializes in cutting-edge embedded solutions - particularly high bandwidth, high complexity, or extremely small size, weight, and power (SWaP) designs. DDC provides solutions that range from system specification to turnkey boards to IP blocks optimized to fit into the smallest possible part. DDC engineers average over 20 years of experience and leverage a large collection of DDC-created and deployed Intellectual Property (IP) “building blocks”; these blocks have proven system performance, shorten cycle time, and reduce risk.
Fidus North America As the inaugural North American member of Xilinx’s Premier Design Services program, Fidus is at the forefront of Xilinx-based video solutions development. We merge sensors and video, offering differentiated services to the Embedded Vision market. Fidus recently developed an augmented reality solution that required live image sensor data to be merged with a video feed and displayed in real time. This development required knowledge of Xilinx’s IP video cores, image sensor operation, and Northwest Logic’s MIPI CSI/DSI IP.
Hardent, Inc. North America Hardent is a professional services firm providing electronic design services, training solutions, IP products and management consulting to leading electronics equipment and component manufacturers throughout the world. Hardent's team of experienced electronic design engineers bring customers the skills and specializations needed to quickly get over technical hurdles and send products to market.
Regulus Japan A member of the Xilinx Alliance Program, Regulus in Japan offers design services for embedded vision and video processing across a wide range of applications such as intelligent cameras, machine vision, and autonomous vehicles (e.g. Drones). We have developed many in-house IP cores, reference boards, mass-production camera boards, and we have designed and delivered numerous customer projects.
Libertron Korea Since its founding in 1998, Libertron has been a leading provider of design services, proprietary products and training, specializing in professional logic implementation on FPGAs and system-level development services. We mainly engage in projects related to high-speed data transfer, control units based on Xilinx embedded processors for various equipment, and various algorithm implementations on FPGAs. We also provide FPGA logic design as well as board design services, including embedded software porting, and a variety of ready-made FPGA boards for development and education to companies as well as universities. As a Xilinx ATP (Authorized Training Provider), we provide various training services related to FPGAs and embedded systems.
Missing Link Electronics North America Missing Link Electronics (MLE) is a Silicon Valley based technology company with offices in San Jose, CA and Neu-Ulm, Germany. We have been enabling key innovators in the automotive, government and aerospace, industrial, and test & measurement markets to build better Embedded Systems, faster.
OKI IDS Japan A Premier Design Service member of the Xilinx Alliance Program, OKI IDS has designed and delivered numerous embedded vision and video projects based on Xilinx All Programmable FPGAs and SoCs. Using its expert capabilities, OKI IDS recently converted a legacy C/C++ based system to Zynq-7000 and accelerated computation-intensive Moving-Object Detection algorithms in the FPGA fabric.
Omnitek EMEA Omnitek is a leading supplier of embedded vision and video IP and design services. Our core strength is algorithm design and optimum implementation on the various compute engines of Xilinx’s all programmable devices. Our solutions cover all stages of the image processing pipeline from camera sensor input to display output. The benefits of small-footprint IP include lower system cost and the highest performance per watt, enabling us to outperform ASIC/ASSP and GPU solutions.
V3 Technology China A Certified Design Service member of the Xilinx Alliance Program, V3 Technology offers embedded vision and video solutions and services based on Xilinx FPGAs and SoCs. V3 solutions include Zynq-7000 & MPSoC development boards, SOMs with pre-built Linux/Android operating environments, and reference designs. V3 has successfully enabled cutting-edge machine learning customers to quickly get to market by using its design services.
Xylon EMEAI A Premier member of the Xilinx Alliance Program, Xylon is a provider of intellectual property (IP) cores, advanced design solutions for Xilinx FPGAs and All Programmable SoCs, and related design services. Since its establishment in 1995, Xylon has designed and delivered more than 300 vision and video All Programmable designs based on Xilinx technology and Xylon logicBRICKS IP cores.
Member Type Description
ArrayFire Computer Vision, Machine Learning ArrayFire is an industry leader in high performance computing software development and coding services. ArrayFire specializes in developing software solutions to help engineers, researchers and scientists leverage the capabilities of FPGAs and other accelerators to solve complicated computing problems in a range of industries, including defense and intelligence, life science, oil and gas, finance, manufacturing, media and others.
The MathWorks Computer Vision, Machine Learning, Sensor Fusion Algorithm development is central to image processing and computer vision because each situation is unique, and good solutions require multiple design iterations. MathWorks provides a comprehensive environment to gain insight into your image and video data, develop algorithms, and explore implementation tradeoffs. Statistics and Machine Learning Toolbox™ provides functions and apps to describe, analyze, and model data. You can use descriptive statistics and plots for exploratory data analysis, fit probability distributions to data, generate random numbers for Monte Carlo simulations, and perform hypothesis tests. Regression and classification algorithms let you draw inferences from data and build predictive models.
MulticoreWare Machine Learning MulticoreWare provides the entire range of Convolutional Neural Networks (CNN) related services including: (1) Porting infrastructure frameworks such as Torch7 and Caffe to Xilinx platforms (2) Data labelling: creation of labelled data sets for training Neural Networks (3) CNN classifiers for faces, lip movement, human speech, vehicles, pedestrians and video artifacts (4) Domain specific applications for broadcast, automotive, traffic monitoring and surveillance.
Avnet Computer Vision, Sensor Fusion Embedded vision is the merging of two technologies: embedded systems and machine vision (also sometimes referred to as computer vision). An embedded system is any microprocessor-based system that isn’t a general-purpose computer. Embedded systems are ubiquitous: they’re found in automobiles, kitchen appliances, consumer electronics devices, medical equipment, and countless other places. Machine Vision is the use of digital processing and intelligent algorithms to interpret meaning from images or video.
Concurrent EDA Computer Vision, Machine Learning, Sensor Fusion Concurrent EDA uses its own proprietary design automation technology to develop a full line of pre-built, high-performance, fully verified FPGA cores for a wide range of image processing, signal processing, data processing, security, and matrix math applications.

Design Examples for Machine Learning and Computer Vision

The reVISION Stack includes four initial design examples (with more to come) that are intended to get you up-and-running in a very short period of time. These design examples will help you easily see the distinct advantage Xilinx All Programmable SoCs have in high performance Embedded Vision applications. The following is a brief description of these four design examples.

  • LK Dense Optical Flow @ 4K60 - Real-time dense implementation of optical flow, detecting object motion for every single pixel. This example uses a non-iterative, non-pyramidal implementation on 4K 60 FPS input coming from a Sony IMX274 sensor via the MIPI interface
  • Stereo Vision - Real-time stereo disparity map calculation including remap, rectification and local block matching. It can process dual 1080p30 stereo camera input via USB3 (a minimal block-matching sketch follows this list)
  • Deep Learning: GoogLeNet - GoogLeNet benchmark with INT8 demonstrated using standard ImageNet inputs
  • Autonomous Perception Design - A combination of the dense optical flow, stereo vision and deep learning design examples described above. Using a combination of MIPI sensors and USB3 cameras, the design represents a real-life use case of an autonomous vision system by combining dense optical flow, stereo vision and a CNN into a single design example
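To give a feel for what the Stereo Vision example computes, the sketch below produces a disparity map with standard OpenCV block matching running on the ARM cores; the file names and matcher parameters are arbitrary. The reVISION design example performs the equivalent remap, rectification and local block-matching chain with the accelerated library functions on live dual 1080p30 USB3 camera input rather than still images.

    // stereo_bm.cpp - minimal stereo disparity sketch using standard OpenCV (illustrative parameters)
    #include <opencv2/opencv.hpp>

    int main()
    {
        // Rectified left/right frames; the design example rectifies live camera input instead.
        cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
        cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
        if (left.empty() || right.empty()) return 1;

        // Local block matching: 64 disparity levels, 15x15 matching window (arbitrary choices).
        cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 15);
        cv::Mat disparity;
        bm->compute(left, right, disparity);   // fixed-point disparities with 4 fractional bits

        // Scale to 8-bit for viewing and save the disparity map.
        cv::Mat disp8;
        disparity.convertTo(disp8, CV_8U, 255.0 / (64.0 * 16.0));
        cv::imwrite("disparity.png", disp8);
        return 0;
    }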

These design examples will be available in May 2017. Get started today designing your computer vision system around Zynq SoCs/MPSoCs and FPGAs by leveraging existing Xilinx and ecosystem design hardware, modules, and production-ready Systems on Module (SOMs).

Be among the first to be notified of reVISION news and updates from Xilinx.

Join the discussion on Xilinx forums.

Knowledge Center

Resource Description
Papers & Tutorials A collection of application notes, whitepapers, tutorials and user guides
Xilinx Embedded Vision Videos Various demonstrations and videos on embedded vision
Xcell Daily and Featured Blogs Daily blog articles from Xilinx and Industry
Powered By Xilinx Showcase of Products Enabled by Xilinx Technology
Forums Xilinx Community Forums

Get started today designing your computer vision system around Zynq SoCs/MPSoCs and FPGAs by leveraging existing Xilinx and ecosystem design hardware, modules and production-ready Systems on Module (SOMs).

Be among the first to be notified of reVISION news and updates from Xilinx.

Join the discussion on Xilinx forums.