
AI Inference Acceleration

  • Lowest latency AI inference
  • Accelerate your whole application
  • Match the speed of AI innovation

Lowest Latency AI Inference

High Throughput OR Low Latency

Achieves throughput through a high batch size: the accelerator must wait for every input in the batch to be ready before processing, resulting in high latency.

High Throughput AND Low Latency

Achieves throughput at a low batch size: each input is processed as soon as it is ready, resulting in low latency.
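The trade-off above can be sketched with some simple latency arithmetic. This is an illustrative model only, not Xilinx code, and the numbers (2 ms of compute per image, inputs arriving every 5 ms) are hypothetical:

```python
# Illustrative sketch of why batch size drives latency.
# Assumptions (hypothetical): inputs arrive one at a time every 5 ms,
# and the accelerator needs 2 ms of compute per input.

def worst_case_latency_ms(batch_size, arrival_interval_ms, compute_per_input_ms):
    """Latency seen by the FIRST input in a batch: it must wait for the
    remaining (batch_size - 1) inputs to arrive, then for the whole
    batch to be computed before any result is returned."""
    wait = (batch_size - 1) * arrival_interval_ms
    compute = batch_size * compute_per_input_ms
    return wait + compute

# High-batch accelerator: good throughput, poor latency.
high_batch = worst_case_latency_ms(batch_size=32, arrival_interval_ms=5,
                                   compute_per_input_ms=2)

# Batch-1 (streaming) accelerator: each input is processed immediately.
batch_one = worst_case_latency_ms(batch_size=1, arrival_interval_ms=5,
                                  compute_per_input_ms=2)

print(high_batch)  # 31*5 + 32*2 = 219 ms
print(batch_one)   # 0 + 1*2 = 2 ms
```

Even with identical per-input compute time, the batch-32 pipeline makes its first input wait for 31 others, which is why real-time inference favors low batch sizes.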

Accelerate Your Whole Application

Xilinx devices deliver optimized hardware acceleration of both AI inference and the application's other performance-critical functions by tightly coupling custom accelerators on a single adaptable silicon device.

This delivers end-to-end application performance that is significantly greater than that of a fixed-architecture AI accelerator such as a GPU. With a GPU, the other performance-critical functions of the application must still run in software, without the performance or efficiency of custom hardware acceleration.

Match the Speed of AI Innovation

AI Models Are Rapidly Evolving

Adaptable silicon allows Domain-Specific Architectures (DSAs) to be updated to optimize for the latest AI models without requiring new silicon.

Fixed silicon devices, with their long development cycles, cannot be optimized for the latest models.

Vitis AI in the Data Center

Xilinx delivers the highest throughput at the lowest latency. In standard benchmark tests on GoogLeNet v1, the Xilinx Alveo U250 platform delivers more than 4x the throughput of the fastest existing GPU for real-time inference. Learn more in the whitepaper: Accelerating DNNs with Xilinx Alveo Accelerator Cards.

Vitis AI at the Edge

AI Inference performance leadership with Vitis AI Optimizer technology.

  • 5X to 50X network performance optimization
  • Increases FPS and reduces power
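Gains of this magnitude typically come from techniques such as channel pruning, where removing channels from a convolution layer shrinks its compute quadratically. The sketch below is rough illustrative arithmetic, not the AI Optimizer's actual algorithm, and the layer dimensions are hypothetical:

```python
# Rough, illustrative arithmetic (not the AI Optimizer's actual algorithm):
# channel pruning shrinks a convolution's multiply-accumulate (MAC) count
# with the product of the kept input- and output-channel fractions.

def conv_macs(h, w, k, c_in, c_out):
    """MACs for one conv layer producing an h x w output feature map
    with k x k kernels, c_in input channels, and c_out output channels."""
    return h * w * k * k * c_in * c_out

# Hypothetical layer: 56x56 feature map, 3x3 kernels, 256 channels.
baseline = conv_macs(h=56, w=56, k=3, c_in=256, c_out=256)

# Keep half the channels on both sides of the layer:
pruned = conv_macs(h=56, w=56, k=3, c_in=128, c_out=128)

print(baseline // pruned)  # 4 -> pruning 50% of channels cuts compute ~4x
```

Because both the input and output channel counts shrink, a 50% channel reduction cuts MACs roughly 4x, which translates into higher FPS and lower power on the same hardware.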

Optimization/Acceleration Compiler Tools

  • Supports networks from TensorFlow and Caffe
  • Compiles networks for the optimized Xilinx Vitis AI runtime
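For a TensorFlow model, the Vitis AI flow is roughly quantize-then-compile, using the `vai_q_tensorflow` quantizer and `vai_c_tensorflow` compiler. The fragment below is an illustrative sketch only: exact flags and file layouts vary by Vitis AI release, and the model name, node names, and paths are hypothetical placeholders. Consult the Vitis AI User Guide for the authoritative options.

```shell
# Illustrative Vitis AI flow (flags vary by release; names/paths are
# placeholders). Step 1: quantize the frozen float model to INT8.
vai_q_tensorflow quantize \
    --input_frozen_graph my_model_frozen.pb \
    --input_nodes  input \
    --output_nodes resnet_v1_50/predictions/Softmax \
    --input_shapes ?,224,224,3 \
    --input_fn     my_calib_input_fn \
    --output_dir   quantize_results

# Step 2: compile the quantized model for a target DPU (arch.json
# describes the device, e.g. an Alveo card or an edge DPU).
vai_c_tensorflow \
    --frozen_pb  quantize_results/deploy_model.pb \
    --arch       arch.json \
    --output_dir compile_results \
    --net_name   my_model
```

The compiled output is then loaded and executed through the Vitis AI runtime on the target platform.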

Adaptable and Real-Time AI Inference Acceleration with Vitis™ AI

Optimal Artificial Intelligence Inference from Edge to Cloud


Xilinx Developer Site

Articles, Documents, Tools and Libraries for AI Inference Acceleration

Stay Informed

Sign up for AI Inference Acceleration updates.
