Developer Tools & Tutorials
Please refer to the following sections for more information; brief usage sketches for each tool follow the list:
Intel® OpenVINO™ is an open-source toolkit for optimizing and deploying deep learning models.
Intel® Extension for PyTorch* is a library that extends PyTorch* with the latest performance optimizations for Intel hardware.
Intel® LLM Library for PyTorch* is an LLM optimization library that accelerates local LLM inference and fine-tuning on Intel hardware.
Intel® oneAPI is a unified programming model that enables developers to write code that can be executed on a variety of hardware accelerators.
Intel® Extension for OpenXLA* is an extension for OpenXLA* that enables JAX models to run seamlessly on Intel GPUs.
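
As a quick orientation to the OpenVINO Python API, a minimal inference sketch is shown below; the model path, target device, and input shape are placeholders, not part of any specific tutorial.

```python
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")           # placeholder path to an OpenVINO IR model
compiled = core.compile_model(model, "CPU")    # target device: "CPU", "GPU", "AUTO", ...

dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input shape
results = compiled([dummy_input])              # synchronous inference call
```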
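A minimal sketch of how Intel® Extension for PyTorch* is typically applied to an eval-mode model on CPU; the toy model and the bfloat16 autocast choice are illustrative assumptions.

```python
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()).eval()  # toy model
model = ipex.optimize(model, dtype=torch.bfloat16)  # apply IPEX optimizations for Intel CPUs

x = torch.randn(8, 64)
with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    y = model(x)
```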
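A minimal sketch of local LLM inference with the Transformers-style API of Intel® LLM Library for PyTorch* (ipex-llm); the Hugging Face model id is a placeholder and load_in_4bit is one of the library's low-bit loading options.

```python
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"    # placeholder Hugging Face model id
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)  # low-bit weights
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("What is embodied intelligence?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```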
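oneAPI is primarily a C++/SYCL programming model; as a Python-side sketch only, the dpctl and dpnp packages from the oneAPI ecosystem can enumerate SYCL devices and run NumPy-style math on them (both packages are assumed to be installed).

```python
import dpctl
import dpnp

# List the SYCL devices (CPUs, GPUs, accelerators) that oneAPI can target.
for device in dpctl.get_devices():
    print(device.name)

# Run a NumPy-style computation on the default SYCL device.
a = dpnp.arange(1_000_000, dtype=dpnp.float32)
print(a.sum())
```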
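A minimal sketch of running a jitted JAX function with Intel® Extension for OpenXLA* installed; whether Intel GPUs are picked up (typically exposed as "xpu" devices through the PJRT plugin) should be confirmed with jax.devices() on your system.

```python
import jax
import jax.numpy as jnp

print(jax.devices())   # with the extension installed, Intel GPUs are expected to be listed

@jax.jit
def affine(x, w, b):
    return jnp.dot(x, w) + b

x = jnp.ones((128, 256))
w = jnp.ones((256, 64))
b = jnp.zeros(64)
print(affine(x, w, b).shape)
```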
You can also find guidance on arranging heterogeneous computing across different workloads in the following section (a minimal device-selection sketch follows):
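
As one illustration only, not necessarily the arrangement described in that section, OpenVINO's AUTO and HETERO device modes place a single model across CPU, GPU, or other accelerators; the model path below is a placeholder.

```python
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")   # placeholder IR model

# Let OpenVINO pick the best available device for this workload.
auto_compiled = core.compile_model(model, "AUTO")

# Split the model across devices: run supported layers on GPU, fall back to CPU.
hetero_compiled = core.compile_model(model, "HETERO:GPU,CPU")
```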
Some of the models used in Embodied Intelligence solutions are enabled on Intel platforms; please see here for tutorials: