
Vitis AI

Adaptable and Real-Time
AI Inference Acceleration


Optimal Artificial Intelligence Inference from Edge to Cloud

The Vitis™ AI development environment is Xilinx’s development platform for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards. It consists of optimized IP, tools, libraries, models, and example designs. It is designed with high efficiency and ease of use in mind, unleashing the full potential of AI acceleration on Xilinx FPGAs and ACAPs.

Vitis AI Deployment Features

How Vitis AI supports your development:

  • Supports mainstream frameworks and the latest models capable of diverse deep learning tasks
  • Provides a comprehensive set of pre-optimized models that are ready to deploy on Xilinx devices. You can find the closest model and start re-training for your application.
  • Provides a powerful quantizer that supports model quantization, calibration, and fine-tuning. For advanced users, an optional AI optimizer can prune a model by up to 90%.
  • The AI profiler provides layer-by-layer analysis to help identify performance bottlenecks.
  • The AI library offers high-level C++ and Python APIs for maximum portability from edge to cloud.
  • Efficient and scalable IP cores can be customized to meet your throughput, latency, and power requirements across many different applications.

Explore All the Possibilities with Vitis AI

Vitis AI Model Zoo



AI Optimizer

With world-leading model compression technology, we can reduce model complexity by 5x to 50x with minimal accuracy impact. Deep Compression takes the performance of your AI inference to the next level.
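To make the idea of pruning concrete, here is a minimal magnitude-pruning sketch in NumPy. This is not Xilinx's AI Optimizer (which also fine-tunes the pruned model to preserve accuracy); the function name and thresholding scheme are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights until roughly `sparsity`
    fraction of them are zero (generic magnitude pruning, for illustration)."""
    flat = np.abs(weights).flatten()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(weights) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
pruned = magnitude_prune(w, 0.9)  # prune ~90% of weights
```

A pruned model stores and multiplies far fewer nonzero weights, which is where the 5x to 50x complexity reduction comes from; in practice pruning is interleaved with re-training to recover any lost accuracy.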

Artificial Intelligence Optimizer Block Diagram

Artificial Intelligence Quantizer Block Diagram

AI Quantizer

By converting 32-bit floating-point weights and activations to fixed-point formats such as INT8, the AI Quantizer reduces computational complexity with little loss of prediction accuracy. The fixed-point network model requires less memory bandwidth and therefore delivers higher speed and better power efficiency than the floating-point model.
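The core float-to-INT8 conversion can be sketched in a few lines of NumPy. This shows generic symmetric per-tensor quantization, not the Vitis AI quantizer itself (which additionally calibrates scales on real activation data and can fine-tune the model):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization (illustrative sketch)."""
    scale = np.abs(x).max() / 127.0      # map the largest magnitude to 127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the INT8 values."""
    return q.astype(np.float32) * scale

x = np.linspace(-1.0, 1.0, 9, dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
```

Each INT8 value occupies a quarter of the memory of a float32, and the worst-case rounding error of this scheme is half the scale step, which is why accuracy impact is typically small for well-calibrated scales.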

AI Compiler

The AI Compiler maps the AI model to a highly efficient instruction set and dataflow. It also performs sophisticated optimizations such as layer fusion and instruction scheduling, and reuses on-chip memory as much as possible.
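Layer fusion is worth a concrete example. A classic case is folding a batch-normalization layer into the preceding linear (or convolution) layer so that only one operation runs at inference time. The sketch below shows the math with NumPy; it illustrates the general technique, not the compiler's actual implementation:

```python
import numpy as np

def fuse_linear_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold batch-norm parameters into the preceding linear layer,
    so y = BN(Wx + b) becomes a single y = W_fused x + b_fused."""
    s = gamma / np.sqrt(var + eps)   # per-output-channel scale
    W_fused = W * s[:, None]         # scale each output row of W
    b_fused = (b - mean) * s + beta
    return W_fused, b_fused

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 8)); b = rng.standard_normal(4)
gamma = rng.standard_normal(4); beta = rng.standard_normal(4)
mean = rng.standard_normal(4); var = rng.random(4) + 0.1
x = rng.standard_normal(8)

# Unfused: linear layer followed by batch-norm
y_ref = gamma * ((W @ x + b) - mean) / np.sqrt(var + 1e-5) + beta
# Fused: a single linear operation
Wf, bf = fuse_linear_bn(W, b, gamma, beta, mean, var)
y_fused = Wf @ x + bf
```

The two results are numerically identical, but the fused form needs one pass over the data instead of two, saving compute and memory bandwidth.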

Artificial Intelligence Compiler Block Diagram

AI Profiler

The performance profiler allows programmers to perform an in-depth analysis of the efficiency and utilization of their AI inference implementation.
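The layer-by-layer idea can be sketched in plain Python: time each stage of a pipeline separately so the slowest one stands out. This toy harness is only an assumption about the concept; the Vitis AI profiler instruments the actual hardware and runtime at a much lower level.

```python
import time

def profile_layers(layers, x):
    """Run a list of (name, fn) stages, timing each one (illustrative)."""
    report = []
    for name, fn in layers:
        start = time.perf_counter()
        x = fn(x)
        report.append((name, time.perf_counter() - start))
    return x, report

# Hypothetical two-stage "model" standing in for real network layers
layers = [
    ("double", lambda v: [2 * e for e in v]),
    ("relu",   lambda v: [max(0, e) for e in v]),
]
out, report = profile_layers(layers, [-1, 2, 3])
for name, seconds in report:
    print(f"{name}: {seconds * 1e6:.1f} us")
```

A per-layer report like this makes it immediately clear which stage dominates latency and is the right target for optimization.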

AI Library

The runtime provides a lightweight set of C++ and Python APIs, enabling easy application development. It also provides efficient task scheduling, memory management, and interrupt handling.

Artificial Intelligence Library Block Diagram