For low-latency ML inference, Xilinx delivers leadership throughput and power efficiency. In standard benchmark tests on GoogleNet V1, Xilinx U250 delivers more than 4x the throughput of the fastest GPU at real-time inference. Learn more in the whitepaper: “Accelerating DNNs with Xilinx Alveo Accelerator Cards”.
xDNNv3 will be available in the ML Suite in November 2018. Get started today with the ML Suite featuring xDNNv2 from the links below.
* See White Paper for performance details
Work through this self-paced tutorial using the Xilinx ML Suite to deploy models for real-time inference on Amazon EC2 F1 FPGA instances. In this lab you will use the ML Suite Python APIs to accelerate your ML applications on F1 instances powered by Xilinx FPGAs, following the general flow sketched below.
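The tutorial follows the usual two-step flow: compile and quantize a trained network for the FPGA overlay offline, then run batches of images through the Python runtime on the F1 instance. The sketch below only illustrates that flow; the function names are placeholders standing in for the ML Suite's actual Python API, which the lab covers in detail.

```python
"""Illustrative sketch only: the functions below are placeholders that stand in
for the ML Suite's xfDNN Python API; see the tutorial for the real calls."""
import numpy as np


def compile_model(prototxt: str, caffemodel: str, precision: str = "int8") -> dict:
    """Placeholder for the offline compile/quantize step that targets the FPGA overlay."""
    return {"prototxt": prototxt, "weights": caffemodel, "precision": precision}


def run_inference(compiled: dict, batch: np.ndarray) -> np.ndarray:
    """Placeholder for programming the F1 FPGA and running a batch of images through it."""
    num_images = batch.shape[0]
    return np.zeros((num_images, 1000), dtype=np.float32)  # dummy class scores


# Typical flow: compile the model offline, then run batched inference on the F1 instance.
compiled = compile_model("googlenet_v1.prototxt", "googlenet_v1.caffemodel")
images = np.random.rand(4, 3, 224, 224).astype(np.float32)  # stand-in for preprocessed images
scores = run_inference(compiled, images)
print(scores.shape)  # (4, 1000)
```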
ML Inference performance leadership with CNN pruning technology.
Optimization/Acceleration Compiler Tools
reVISION Knowledge Center
Documentation, Resources, Papers, Tutorials for Edge Inference
Acceleration Knowledge Center
Documentation, Resources, Papers, Tutorials for Cloud Inference
Xilinx University Program
Enabling the use of Xilinx technologies for academic teaching and research