AI Tools Selector Guide

ID 785444
Updated 4/12/2024
Version 2024
Public

Use the AI Tools Selector to install Intel AI Tools through your preferred distribution channel and package selection. You can choose from the following options:

  • Presets
  • Custom selection of packages
  • Offline installer

Before proceeding with the installation of AI Tools, make sure your system meets the necessary System Requirements.

Set Up System Before Installation

Before installing AI Tools via conda or pip, set up your system as described below.

conda

Install conda

  1. Download the appropriate Miniconda installer for Linux* OS:
    wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
  2. In your terminal, run the installer:
    sh Miniconda3-latest-Linux-x86_64.sh
  3. Delete the downloaded file:
    rm Miniconda3-latest-Linux-x86_64.sh
  4. (Optional) To speed up dependency resolution, use libmamba as the solver. It is the default in recent conda distributions (Miniconda or Anaconda); for older installations, set it with:
    conda config --set solver libmamba
    To verify that libmamba is set:
    conda config --show solver
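After the installer finishes, a quick sanity check is to confirm conda is reachable from a new shell. This is a generic check, not part of Intel's documented steps; the fallback path below is the Miniconda default prefix and may differ on your system:

```shell
# Confirm conda is on PATH after installation.
if command -v conda >/dev/null 2>&1; then
  conda --version
elif [ -x "$HOME/miniconda3/bin/conda" ]; then
  echo "conda installed at \$HOME/miniconda3/bin but not on PATH - run 'conda init' or restart your shell"
else
  echo "conda not found - rerun the Miniconda installer"
fi
```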

To learn more about conda installation, see the Conda Linux installation instructions.

Create Environment

Create and activate a virtual environment:

conda create -n example_env -y
conda activate example_env

Using oneAPI Collective Communications Library Bindings for PyTorch* and Intel® Extension for PyTorch*

Set environment variables for Intel® oneAPI Collective Communications Library (oneCCL) to run distributed workloads using oneCCL Bindings for PyTorch* with the Intel® Extension for PyTorch* via conda:

source $(python -c "import oneccl_bindings_for_pytorch as torch_ccl;print(torch_ccl.cwd)")/env/setvars.sh
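The command above resolves the setvars.sh path at runtime from the installed bindings package. A minimal sketch of the same lookup in Python, assuming the env/setvars.sh layout shown in that command (find_oneccl_setvars is an illustrative helper name, not an Intel API):

```python
# Resolve the setvars.sh shipped with oneCCL Bindings for PyTorch,
# mirroring the `source $(python -c ...)` command above.
import importlib.util
import os

def find_oneccl_setvars():
    """Return the path to env/setvars.sh if the bindings are installed, else None."""
    spec = importlib.util.find_spec("oneccl_bindings_for_pytorch")
    if spec is None or not spec.submodule_search_locations:
        return None
    package_dir = list(spec.submodule_search_locations)[0]
    return os.path.join(package_dir, "env", "setvars.sh")

path = find_oneccl_setvars()
print(path if path else "oneccl_bindings_for_pytorch is not installed")
```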

pip

Install Intel® oneAPI Base Toolkit (Ubuntu 22.04)

  1. Download the Intel® oneAPI Base Toolkit from the official Intel® repository:
    wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/fdc7a2bc-b7a8-47eb-8876-de6201297144/l_BaseKit_p_2024.1.0.596_offline.sh
  2. Install the Intel® oneAPI Base Toolkit using the bash script:
    sh ./l_BaseKit_p_2024.1.0.596_offline.sh -a --silent --eula accept
  3. Depending on whether you want to use the Intel® Extension for PyTorch* or Intel® Extension for TensorFlow*, set the environment variables with one of these scripts:

    Intel® Extension for TensorFlow*

    source /opt/intel/oneapi/compiler/latest/env/vars.sh  
    source /opt/intel/oneapi/mkl/latest/env/vars.sh  
    source /opt/intel/oneapi/ccl/latest/env/vars.sh  
    source /opt/intel/oneapi/mpi/latest/env/vars.sh
    

    Intel® Extension for PyTorch*

    source /opt/intel/oneapi/compiler/latest/env/vars.sh  
    source /opt/intel/oneapi/mkl/latest/env/vars.sh  

Install Python (Ubuntu 22.04)

  1. Install Python from apt (if you are the root user, omit sudo):
    sudo apt-get update && sudo apt-get install python3.10
  2. Verify that Python is installed:
    which python3
  3. Install venv to create virtual environments:
    sudo apt-get install python3.10-venv

Create Environment

Create and activate a virtual environment:

python3.10 -m venv example_env
source example_env/bin/activate
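While the environment is active, it is worth confirming that the environment's own interpreter and pip are the ones being resolved. A sketch using a throwaway environment (check_env is just an illustrative name):

```shell
# Create, activate, and sanity-check a disposable virtual environment.
python3 -m venv check_env
. check_env/bin/activate
which python              # resolves to check_env/bin/python while active
python -m pip --version
deactivate
rm -rf check_env          # remove the throwaway environment
```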

Install GPU Drivers

If you use an Intel GPU, install the GPU drivers separately as described in Intel® Software for General Purpose GPU Capabilities.

Install AI Tools

Presets

The presets represent combinations of AI tools for easy installation and use. The following presets are available for integration into your workflow:

  • Data Analytics: Uncover valuable insights about your business and customers using libraries and tools optimized for Intel architectures. Make informed, data-driven decisions with enhanced performance.
  • Classical ML: Accelerate your Machine Learning and Data Science pipelines with the power of open libraries optimized for Intel architectures. Enhance the efficiency and speed of your ML tasks.
  • Deep Learning: Boost the performance of your single node and distributed Deep Learning workloads on Intel hardware with Intel's optimizations for TensorFlow and PyTorch.
  • Inference Optimization: Reduce model size and improve the speed of your deep learning inference deployments on Intel hardware.

Install a Preset

To use one of the preset packages, follow these steps:

  1. Go to the AI Tools Selector and click the preferred preset package name from the left sidebar. The tools included in the package are displayed as checked options while all other tools are dimmed automatically and not available for selection.
  2. Choose the version of the Intel® Optimized Python for installation via conda or Docker container.
  3. Choose either conda, pip, or Docker as your preferred distribution type.
  4. Make sure you set up the environment as described on the right panel of the selector under the installation command.
  5. Copy and execute the provided installation command in your Terminal.

Click the Customize button to switch to a custom selection of packages.

Accessing Docker Hub

By accessing, downloading, or using this software and any required dependent software (the "Software Package"), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third-party software included with the Software Package. Refer to the licensing information for additional details.

Custom Selection of Packages

For more personalized package selection, you can install individual AI tools using conda or pip:

  1. Go to the AI Tools Selector and click the Customize button on the left sidebar.
  2. Select the appropriate Intel® Optimized Python version. Note that this selection is available only for installation via conda and will be automatically dimmed for pip.
  3. Choose either conda or pip as your preferred package type.
  4. Click the checkboxes of the required tools, frameworks, SDKs, and/or CLIs. Note that certain combinations may be unavailable and will be automatically dimmed.
  5. Make sure you set up the environment as described in Set Up System Before Installation.
  6. Copy and execute the provided installation command in your Terminal.

Offline Installer

The AI Tools offline installer is a set of pre-built conda environments with the Intel optimization packages for Python. It contains the following environments:

Environment      Main Component
base             Intel® Distribution for Python*
modin            Intel® Distribution of Modin*
pytorch          Intel® Extension for PyTorch*
pytorch-gpu      Intel® Extension for PyTorch*
tensorflow       Intel® Extension for TensorFlow*
tensorflow-gpu   Intel® Extension for TensorFlow*

NOTE

To use GPU environments, make sure the corresponding GPU drivers are installed as described in Install GPU Drivers.

To install AI Tools via the offline installer:

  1. Go to the AI Tools Selector and click the Offline installer button on the left sidebar.
  2. Follow the download and installation instructions on the right-hand side of the selector.

After installation, configure the system as described in Get Started with the AI Tools.

Verify Installation

Use the commands below to verify that the AI tools are properly installed:

  • Intel® Extension for PyTorch* (CPU):
    python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__);"
  • Intel® Extension for PyTorch* (GPU):
    python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__); [print(f'[{i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())];"
  • Intel® Extension for TensorFlow* (CPU):
    python -c "import intel_extension_for_tensorflow as itex; print(itex.__version__)"
  • Intel® Extension for TensorFlow* (GPU):
    python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
  • Intel® Optimization for XGBoost*:
    python -c "import xgboost as xgb; print(xgb.__version__)"
  • Intel® Extension for Scikit-learn*:
    python -c "from sklearnex import patch_sklearn; patch_sklearn()"
  • Intel® Distribution of Modin*:
    python -c "import modin; print(modin.__version__)"
  • Intel® Neural Compressor:
    python -c "import neural_compressor as inc; print(inc.__version__)"
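The checks above can also be rolled into a single script that reports each tool's status instead of raising on the first missing import. The module names are those used in the commands above; the script itself is a convenience sketch, not part of the official verification steps:

```python
# Run the import checks from the table above in one pass, printing each
# tool's version or "not installed".
import importlib

PACKAGES = {
    "Intel Extension for PyTorch": "intel_extension_for_pytorch",
    "Intel Extension for TensorFlow": "intel_extension_for_tensorflow",
    "XGBoost": "xgboost",
    "Extension for Scikit-learn": "sklearnex",
    "Modin": "modin",
    "Neural Compressor": "neural_compressor",
}

def check(module_name):
    """Return the module's version string, or None if it cannot be imported."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return None
    return getattr(module, "__version__", "unknown version")

for label, module_name in PACKAGES.items():
    version = check(module_name)
    print(f"{label}: {version if version else 'not installed'}")
```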

Next Steps

After successful installation, refer to the following resources to start using the installed product(s):

  • AI Tools Getting Started Samples demonstrate the use of AI Tools and provide instructions on how to run sample code.
  • AI Reference Kits catalog provides over 30 kits with trained machine learning and deep learning models. Each kit includes model code, training data, instructions for the machine learning pipeline, libraries, and Intel® oneAPI components.
  • AI Reference Models (formerly Model Zoo) repository contains links to pre-trained models, sample scripts, best practices, and step-by-step tutorials for many popular open-source machine learning models optimized by Intel.
  • Working with Preset Containers document provides more information about preset containers and instructions on how to run them.

Known Issues

  • When installing Intel® Distribution of Modin* with other products using customized installation via conda, compatibility issues may arise. Install Intel® Distribution of Modin* separately with the following command:
    conda install -c intel/label/oneapi -c intel -c conda-forge modin-all --override-channels
  • When installing packages via conda, you may get a "ClobberError", which occurs when conda detects that a file or directory will be overwritten. This error does not impact the functionality of the installed packages and can be safely ignored.